# KoCommonGEN v2

KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models (ACL 2024 Findings)

Jaehyung Seo, Jaewook Lee, Chanjun Park, SeongTae Hong, Seungjun Lee and Heuiseok Lim

NLP & AI Lab, Korea University
## News
- September 27, 2023: Provided data support for the Open Ko-LLM Leaderboard
- August 7, 2024: Released the dataset
- August 10, 2024: Added experimental results for newly released models
- August 14, 2024: Presented the research paper at ACL 2024
## Human Evaluation
We recruited 22 native Korean-speaking volunteers as human evaluators and paid them $0.80 per question.
| Model | # | Average Score | Cohen's kappa | Krippendorff's alpha |
|---|---|---|---|---|
| Human | 22 | 0.8395 | 0.7693 | 0.7706 |
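The agreement scores above are standard inter-rater statistics. As a minimal illustration of the two-rater case (the 22-rater figures in the table would aggregate pairwise comparisons or use Krippendorff's alpha directly; this is not the paper's exact computation), Cohen's kappa can be sketched as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, assuming the raters label independently.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    # Undefined when p_e == 1 (both raters use a single label throughout).
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement yields kappa = 1, while agreement at exactly chance level yields kappa = 0.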
## Models (August 10, 2024)
Results of the 2-shot evaluation of the newly released models.
| Model | Size | Acc_norm | Stderr | Link |
|---|---|---|---|---|
| GPT-4 (June 13, 2023) | - | 0.7450 | - | - |
| Mistral-Nemo-Instruct | 12B | 0.6612 | 0.0163 | 🔗 |
| Mistral-Nemo-Base | 12B | 0.6340 | 0.0166 | 🔗 |
| Meta-Llama-3.1-8B | 8B | 0.6246 | 0.0166 | 🔗 |
| QWEN2-7B base | 7B | 0.6187 | 0.0167 | 🔗 |
| EXAONE-3.0-7.8B-Instruct | 7.8B | 0.6088 | 0.0168 | 🔗 |
| MLP-KTLim-Bllossom-8B | 8B | 0.6057 | 0.0168 | 🔗 |
| Meta-Llama-3.1-8B-Instruct | 8B | 0.6057 | 0.0168 | 🔗 |
| KULLM3 | 10.8B | 0.6033 | 0.0168 | 🔗 |
| QWEN2-7B inst | 7B | 0.5832 | 0.0170 | 🔗 |
| Gemma-2-9b-it | 9B | 0.5714 | 0.0170 | 🔗 |
| Aya-23-8B | 8B | 0.5159 | 0.0172 | 🔗 |
| Allganize-Alpha-Instruct | 8B | 0.4970 | 0.0172 | 🔗 |
As described in the paper, the benchmark can be used to evaluate a wide range of models.
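Assuming the Stderr column reports the usual binomial standard error of accuracy, sqrt(p(1-p)/n) (evaluation harnesses sometimes use an n-1 denominator instead, so treat this as an approximation), it can be computed as:

```python
import math

def acc_and_stderr(correct, total):
    """Accuracy over `total` questions and its binomial standard error.

    Standard error of a proportion: sqrt(p * (1 - p) / n).
    """
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)
    return p, se
```

For example, a model answering 50 of 100 questions correctly has accuracy 0.5 with a standard error of 0.05.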
## 🇰🇷🇺🇸🇯🇵🇨🇳🇪🇸 Code-switching
The dataset can be found on Hugging Face at `nlpai-lab/ko_commongen_v2_code_switching`.
This dataset contains code-switching data for the following languages:
- Korean (korean)
- English (english)
- Japanese (japan)
- Chinese (china)
- Spanish (espanol)
(The code-switching data relies on machine translation, which may result in some inaccuracies.)
## Citation

```bibtex
@inproceedings{seo2024Kocommongenv2,
    title = "KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models",
    author = "Jaehyung Seo and Jaewook Lee and Chanjun Park and SeongTae Hong and Seungjun Lee and Heuiseok Lim",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "TBD",
    doi = "TBD",
    pages = "TBD"
}
```
## Warning!
This dataset contains some instances of toxic speech.
## Acknowledgement
We sincerely appreciate the dedication of Chanjun Park, Sanghoon Kim, and Sunghun Kim (Sung Kim) from Upstage AI in managing one of the benchmark datasets for the Open Ko-LLM Leaderboard.