kbl-v0.1.1 (#2493)
* release kbl-v0.1

* fix linting

* remove rag tasks, as doc_to_text functions cause trouble

* remove remaining rag tasks

* remove unnecessary repeat in yaml files and rag dataset in hf-hub

* remove unnecessary newline; introduce cfg files in lbox/kbl in hf

* Make task yaml files consistent with hf-datasets-config

* Make task yaml files consistent with hf-datasets-config

* Remove trailing whitespace in doc-to-text

* Remove unnecessary yaml file

* Fix task naming error

* trailing space removed
whwang299 authored Nov 16, 2024
1 parent badf273 commit cbc31eb
Showing 72 changed files with 516 additions and 0 deletions.
1 change: 1 addition & 0 deletions lm_eval/tasks/README.md
@@ -56,6 +56,7 @@
| [ifeval](ifeval/README.md) | Instruction-following evaluation tasks built from verifiable instructions. | English |
| [inverse_scaling](inverse_scaling/README.md) | Multiple-choice tasks from the Inverse Scaling Prize, designed to find settings where larger language models perform worse. | English |
| [japanese_leaderboard](japanese_leaderboard/README.md) | Japanese language understanding tasks to benchmark model performance on various linguistic aspects. | Japanese |
| [kbl](kbl/README.md) | Korean Benchmark for Legal Language Understanding. | Korean |
| [kmmlu](kmmlu/README.md) | Knowledge-based multi-subject multiple choice questions for academic evaluation. | Korean |
| [kobest](kobest/README.md) | A collection of tasks designed to evaluate understanding in Korean language. | Korean |
| [kormedmcqa](kormedmcqa/README.md) | Medical question answering tasks in Korean to test specialized domain knowledge. | Korean |
127 changes: 127 additions & 0 deletions lm_eval/tasks/kbl/README.md
@@ -0,0 +1,127 @@
# kbl

### Paper

Title: `Developing a Pragmatic Benchmark for Assessing Korean Legal Language Understanding in Large Language Models`

Abstract: `Large language models (LLMs) have demonstrated remarkable performance in the legal domain, with GPT-4 even passing the Uniform Bar Exam in the U.S. However, their efficacy remains limited for non-standardized tasks and tasks in languages other than English. This underscores the need for careful evaluation of LLMs within each legal system before application. Here, we introduce KBL, a benchmark for assessing the Korean legal language understanding of LLMs, consisting of (1) 7 legal knowledge tasks (510 examples), (2) 4 legal reasoning tasks (288 examples), and (3) the Korean bar exam (4 domains, 53 tasks, 2,510 examples). The first two datasets were developed in close collaboration with lawyers to evaluate LLMs in practical scenarios in a certified manner. Furthermore, considering legal practitioners' frequent use of extensive legal documents for research, we assess LLMs in both a closed-book setting, where they rely solely on internal knowledge, and a retrieval-augmented generation (RAG) setting, using a corpus of Korean statutes and precedents. The results indicate substantial room and opportunities for improvement.`

`Korean Benchmark for Legal Language Understanding`

Homepage: `https://github.com/lbox-kr/kbl`


### Citation

```
@inproceedings{kim2024kbl,
title = "Developing a Pragmatic Benchmark for Assessing {K}orean Legal Language Understanding in Large Language Models",
author = {Yeeun Kim and Young Rok Choi and Eunkyung Choi and Jinhwan Choi and Hai Jin Park and Wonseok Hwang},
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.findings-emnlp.319",
pages = "5573--5595",
}
```

### Groups, Tags, and Tasks

#### Groups

#### Tags

* `kbl`: `All kbl tasks (7 knowledge, 4 reasoning, and 53 bar exam)`
* `kbl_knowledge_em`: `7 knowledge tasks`
* `kbl_reasoning_em`: `4 reasoning tasks`
* `kbl_bar_exam_em`: `53 bar exam tasks`
* `kbl_bar_exam_em_civil`: `13 bar exam tasks, civil law`
* `kbl_bar_exam_em_criminal`: `13 bar exam tasks, criminal law`
* `kbl_bar_exam_em_public`: `13 bar exam tasks, public law`
* `kbl_bar_exam_em_responsibility`: `14 bar exam tasks, professional responsibility (RESP) examination`

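Any of these tags can be passed straight to the harness. Below is a minimal sketch using the Python API (assuming lm-eval v0.4+; the model choice and batch size are placeholders, not recommendations):

```python
import lm_eval

# Evaluate a small HF model on the civil-law bar exam tag; swap in any
# model and any tag/task name from the lists in this README.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",  # placeholder model
    tasks=["kbl_bar_exam_em_civil"],
    batch_size=8,
)
print(results["results"])
```

The CLI equivalent is `lm_eval --model hf --model_args pretrained=<model> --tasks kbl_bar_exam_em_civil`.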

#### Tasks

* `kbl_common_legal_mistake_qa_em`: `A QA task evaluating common legal misconceptions held by the general public.`
* `kbl_knowledge_common_legal_mistake_qa_reasoning`: `Similar to 'kbl_common_legal_mistake_qa_em', but the answers are presented with rationales for why they are correct or wrong.`
* `kbl_knowledge_legal_concept_qa`: `A QA task addressing knowledge of complex legal concepts (legal terms).`
* `kbl_knowledge_offense_component_qa`: `A QA task evaluating whether a model knows whether specific actions meet the elements of a criminal offense.`
* `kbl_knowledge_query_and_statute_matching_qa`: `A QA task assessing whether the language model can accurately identify the relevant statute for a given query.`
* `kbl_knowledge_statute_hallucination_qa`: `A QA task evaluating whether a model can select the correct pair of a (fictitious) statute and its corresponding reasoning for confusing legal questions.`
* `kbl_knowledge_statute_number_and_content_matching_qa`: `A QA task evaluating whether a model can accurately match the content of a law to its statute number.`
* `kbl_reasoning_case_relevance_qa_p`: `A QA task where a model must determine whether a given precedent is relevant to an input precedent.`
* `kbl_reasoning_case_relevance_qa_q`: `A QA task where a model must determine whether a given precedent is relevant to an input query.`
* `kbl_reasoning_causal_reasoning_qa`: `A QA task where a model must assess whether the defendant's actions were the direct and decisive cause of the victim's injury or death, given a factual description and claims.`
* `kbl_reasoning_statement_consistency_qa`: `A QA task where a model must accurately determine whether two presented statements are consistent with each other.`
* `bar_exam_civil_2012`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_civil_2013`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_civil_2014`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_civil_2015`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_civil_2016`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_civil_2017`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_civil_2018`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_civil_2019`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_civil_2020`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_civil_2021`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_civil_2022`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_civil_2023`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_civil_2024`: `Korean bar exam multiple-choice questions, civil law`
* `bar_exam_criminal_2012`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_criminal_2013`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_criminal_2014`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_criminal_2015`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_criminal_2016`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_criminal_2017`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_criminal_2018`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_criminal_2019`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_criminal_2020`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_criminal_2021`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_criminal_2022`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_criminal_2023`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_criminal_2024`: `Korean bar exam multiple-choice questions, criminal law`
* `bar_exam_public_2012`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_public_2013`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_public_2014`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_public_2015`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_public_2016`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_public_2017`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_public_2018`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_public_2019`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_public_2020`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_public_2021`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_public_2022`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_public_2023`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_public_2024`: `Korean bar exam multiple-choice questions, public law`
* `bar_exam_responsibility_2010`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2011`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2012`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2013`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2014`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2015`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2016`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2017`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2018`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2019`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2020`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2021`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2022`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`
* `bar_exam_responsibility_2023`: `Korean bar exam multiple-choice questions, professional responsibility (RESP) examination`

### Checklist

For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?


If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
36 changes: 36 additions & 0 deletions lm_eval/tasks/kbl/bar_exam/civil/_base_em_yaml
@@ -0,0 +1,36 @@
tag:
- kbl
- kbl_bar_exam_em
- kbl_bar_exam_em_civil
description: '당신은 사용자의 질문에 친절하고 논리적으로 답변해 주는 법률 전문가 챗봇 입니다.\n'
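# Rough English gloss (editorial comment, not part of the committed config): the
# description above is a system prompt, "You are a legal expert chatbot that
# answers the user's questions kindly and logically." The doc_to_text template
# below presents the question and choices A-E and asks for a terse reply of the
# form '답변: A' ("Answer: A").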
dataset_path: lbox/kbl
test_split: test
output_type: generate_until
doc_to_text: '### 질문: {{question}}

다음 각 선택지를 읽고 A, B, C, D, E 중 하나를 선택하여 ''답변: A'' 와 같이 단답식으로 답해 주세요.

A. {{A}}

B. {{B}}

C. {{C}}

D. {{D}}

E. {{E}}

### 답변:'
doc_to_target: gt
metric_list:
- metric: exact_match
aggregation: mean
higher_is_better: true
ignore_case: true
ignore_punctuation: true
filter_list:
- name: get-answer
filter:
- function: regex
regex_pattern: ([A-E]).*
- function: take_first
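The same description, prompt, metric, and filter chain are reused verbatim by the criminal and public base configs below. The `get-answer` filter extracts the first A-E letter from the generation with `([A-E]).*`, then keeps the first candidate. A rough Python sketch of that extraction step (the harness's actual filter implementation may differ in details, e.g. its fallback value):

```python
import re

# Mirrors regex_pattern: ([A-E]).* from the filter_list above.
PATTERN = re.compile(r"([A-E]).*")

def extract_choice(generation: str) -> str:
    match = PATTERN.search(generation)
    return match.group(1) if match else "[invalid]"  # assumed fallback

print(extract_choice("답변: A"))          # -> "A" (the requested answer format)
print(extract_choice("B. 왜냐하면 ..."))  # -> "B" (letter embedded in a longer answer)
```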
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2012
dataset_name: bar_exam_civil_2012
include: _base_em_yaml
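Each per-year file (here and in the criminal and public directories below) overrides only `task` and `dataset_name`, inheriting everything else from `_base_em_yaml` via `include`. Conceptually, the merge works like this simplified sketch (not the harness's actual config loader):

```python
import pathlib
import yaml

def load_task_config(path: pathlib.Path) -> dict:
    """Resolve 'include' recursively; keys in the child override the base."""
    cfg = yaml.safe_load(path.read_text())
    if "include" in cfg:
        base = load_task_config(path.parent / cfg.pop("include"))
        base.update(cfg)  # the child's task / dataset_name win
        cfg = base
    return cfg
```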
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2013
dataset_name: bar_exam_civil_2013
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2014
dataset_name: bar_exam_civil_2014
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2015
dataset_name: bar_exam_civil_2015
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2016
dataset_name: bar_exam_civil_2016
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2017
dataset_name: bar_exam_civil_2017
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2018
dataset_name: bar_exam_civil_2018
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2019
dataset_name: bar_exam_civil_2019
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2020
dataset_name: bar_exam_civil_2020
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2021
dataset_name: bar_exam_civil_2021
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2022
dataset_name: bar_exam_civil_2022
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2023
dataset_name: bar_exam_civil_2023
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_civil_2024
dataset_name: bar_exam_civil_2024
include: _base_em_yaml
36 changes: 36 additions & 0 deletions lm_eval/tasks/kbl/bar_exam/criminal/_base_em_yaml
@@ -0,0 +1,36 @@
tag:
- kbl
- kbl_bar_exam_em
- kbl_bar_exam_em_criminal
description: '당신은 사용자의 질문에 친절하고 논리적으로 답변해 주는 법률 전문가 챗봇 입니다.\n'
dataset_path: lbox/kbl
test_split: test
output_type: generate_until
doc_to_text: '### 질문: {{question}}

다음 각 선택지를 읽고 A, B, C, D, E 중 하나를 선택하여 ''답변: A'' 와 같이 단답식으로 답해 주세요.

A. {{A}}

B. {{B}}

C. {{C}}

D. {{D}}

E. {{E}}

### 답변:'
doc_to_target: gt
metric_list:
- metric: exact_match
aggregation: mean
higher_is_better: true
ignore_case: true
ignore_punctuation: true
filter_list:
- name: get-answer
filter:
- function: regex
regex_pattern: ([A-E]).*
- function: take_first
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2012
dataset_name: bar_exam_criminal_2012
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2013
dataset_name: bar_exam_criminal_2013
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2014
dataset_name: bar_exam_criminal_2014
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2015
dataset_name: bar_exam_criminal_2015
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2016
dataset_name: bar_exam_criminal_2016
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2017
dataset_name: bar_exam_criminal_2017
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2018
dataset_name: bar_exam_criminal_2018
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2019
dataset_name: bar_exam_criminal_2019
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2020
dataset_name: bar_exam_criminal_2020
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2021
dataset_name: bar_exam_criminal_2021
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2022
dataset_name: bar_exam_criminal_2022
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2023
dataset_name: bar_exam_criminal_2023
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_criminal_2024
dataset_name: bar_exam_criminal_2024
include: _base_em_yaml
36 changes: 36 additions & 0 deletions lm_eval/tasks/kbl/bar_exam/public/_base_em_yaml
@@ -0,0 +1,36 @@
tag:
- kbl
- kbl_bar_exam_em
- kbl_bar_exam_em_public
description: '당신은 사용자의 질문에 친절하고 논리적으로 답변해 주는 법률 전문가 챗봇 입니다.\n'
dataset_path: lbox/kbl
test_split: test
output_type: generate_until
doc_to_text: '### 질문: {{question}}

다음 각 선택지를 읽고 A, B, C, D, E 중 하나를 선택하여 ''답변: A'' 와 같이 단답식으로 답해 주세요.

A. {{A}}

B. {{B}}

C. {{C}}

D. {{D}}

E. {{E}}

### 답변:'
doc_to_target: gt
metric_list:
- metric: exact_match
aggregation: mean
higher_is_better: true
ignore_case: true
ignore_punctuation: true
filter_list:
- name: get-answer
filter:
- function: regex
regex_pattern: ([A-E]).*
- function: take_first
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_public_2012
dataset_name: bar_exam_public_2012
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_public_2013
dataset_name: bar_exam_public_2013
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_public_2014
dataset_name: bar_exam_public_2014
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_public_2015
dataset_name: bar_exam_public_2015
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_public_2016
dataset_name: bar_exam_public_2016
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_public_2017
dataset_name: bar_exam_public_2017
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_public_2018
dataset_name: bar_exam_public_2018
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_public_2019
dataset_name: bar_exam_public_2019
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_public_2020
dataset_name: bar_exam_public_2020
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_public_2021
dataset_name: bar_exam_public_2021
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_public_2022
dataset_name: bar_exam_public_2022
include: _base_em_yaml
@@ -0,0 +1,3 @@
task: kbl_bar_exam_em_public_2023
dataset_name: bar_exam_public_2023
include: _base_em_yaml