A preview of the data (`concept_set` and the four candidate sentences `1`–`4` are strings; `gold` is an int64 in [1, 4]):

| concept_set | 1 | 2 | 3 | 4 | gold |
|---|---|---|---|---|---|
| λ‚˜#κ΅ν›ˆμ #λ‚΄μš©#주제#κ°•μ—°#ν•˜λ‹€ | λ‚˜λŠ” κ΅ν›ˆμ μΈ λ‚΄μš©μ΄ 강연을 ν•˜λ‹€. | λ‚˜λŠ” κ΅ν›ˆμ μΈ λ‚΄μš©μ„ 주제둜 강연을 ν•˜μ§€ μ•Šμ•˜λ‹€. | λ‚˜λŠ” κ΅ν›ˆμ μΈ λ‚΄μš©μ΄ 주제 λ•Œλ¬Έμ— 강연을 ν–ˆμ–΄. | κ΅ν›ˆμ μΈ λ‚΄μš©μ΄ λ‚˜λ₯Ό κ°•μ—°μ—κ²Œ ν•˜λ‹€. | 2 |
| μ˜³λ‹€#κ·Έλ₯΄λ‹€#νŒλ‹¨ν•˜λ‹€ | λ‚˜λŠ” 옳고 그름을 κ°κ΄€μ μœΌλ‘œ νŒλ‹¨ν•˜μ§€ μ•Šμ•˜λ‹€. | λ‚˜λŠ” 옳고 그름을 κ°κ΄€μ μœΌλ‘œ μ•ˆ ν•΄μ•Ό ν–ˆλ‹€. | 옳고 그름을 νŒλ‹¨ν•˜κ²Œ κ°κ΄€μ μ΄μ—ˆλ‹€. | 옳고 그름이 λˆ„κ°€ νŒλ‹¨ν•˜κ²Œ μ •λ‹Ήν•˜κ²Œ λ΄€λ‹€. | 1 |
| μ‹œλ―Ό#단체#ν™˜κ²½#μ •μ±…#λ°˜λŒ€ν•˜λ‹€#νŒŒκ΄΄ν•˜λ‹€ | μ‹œλ―Ό λ‹¨μ²΄λŠ” ν™˜κ²½μ„ νŒŒκ΄΄ν•˜λŠ” 정책을 λ°˜λŒ€ν•˜λŠ” μ²™ ν•œλ‹€. | ν™˜κ²½μ„ νŒŒκ΄΄ν•˜λŠ” 정책이 κ°•λ ₯히 λ°˜λŒ€ν•˜λ‹€. | ν™˜κ²½μ„ νŒŒκ΄΄ν•˜λŠ” 정책은 μ‹œλ―Ό λ‹¨μ²΄μ˜ κ°•λ ₯ν•œ λ°˜λ°œμ— λΆ€λ”ͺνžŒλ‹€. | μ‹œλ―Ό 단체 ν™˜κ²½μ„ νŒŒκ΄΄ν•˜λŠ” 정책을 적극 λ°˜λŒ€λ§Œν•˜κ³  μ‹€μ²œμ€ ν•˜μ§€ μ•ŠλŠ”λ‹€. | 3 |
| λ§Œμ§€λ‹€#λ‹€λž˜λΌ#λ”λŸ½λ‹€ | λ‹€λž˜λΌκ°€ μžˆλŠ” μ‚¬λžŒμ΄ λ§Œμ§€λ©΄ λ”λŸ½λ‹€. | λ”λŸ¬μš΄ μ‚¬λžŒμ€ λ‹€λž˜λΌκ°€ μžˆμœΌλ‹ˆκΉŒ λ§Œμ§€λ©΄ μ•ˆλœλ‹€. | λ‹€λž˜λΌκ°€ 생기면 흙이 묻은 μ†μœΌλ‘œ λˆˆμ„ λ§Œμ§€λ©΄ λœλ‹€. | λ”λŸ¬μš΄ μ†μœΌλ‘œ λ§Œμ§€λ©΄ λ‹€λž˜λΌκ°€ 생겨버릴거야. | 4 |
| κ·Έ#μž‘μ „#μ†ŒλŒ€#λͺ…λ Ήν•˜λ‹€#κ²°μ •ν•˜λ‹€#μ§€νœ˜ | μ†ŒλŒ€μž₯인 κ·ΈλŠ” μž‘μ „μ„ λͺ…λ Ήν•˜μ§€λ„ μ•Šκ³  μ†ŒλŒ€ μ§€νœ˜λ₯Ό κ²°μ •ν•˜μ§€λ„ μ•Šμ•˜λ‹€. | μ†ŒλŒ€μž₯인 κ·ΈλŠ” μž‘μ „μ„ λͺ…λ Ήν•˜κ³  κ²°μ •ν–ˆλ‹€. | κ·ΈλŠ” μž‘μ „μ΄ λͺ…λ Ήν•˜κ³ , μ†ŒλŒ€ μ§€νœ˜λ₯Ό κ²°μ •ν•΄μ•Ό 함을 κΉ¨λ‹«μ•˜λ‹€. | μž‘μ „μ„ λͺ…λ Ήν•˜λ©΄μ„œ μ†ŒλŒ€ κ·Έμ—κ²Œ μ§€νœ˜κ°€ κ²°μ •λ˜μ—ˆλ‹€. | 1 |
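
A minimal loading sketch with the πŸ€— Datasets library. The repo ID `nlpai-lab/ko_commongen_v2` is an assumption, inferred from the code-switching dataset ID named later in this card:

```python
# Minimal sketch: load KoCommonGEN v2 with the Hugging Face Datasets library.
# The repo ID below is an assumption inferred from the code-switching repo.
from datasets import load_dataset

ds = load_dataset("nlpai-lab/ko_commongen_v2")
print(ds)  # check the available splits

row = next(iter(ds.values()))[0]  # first row of the first split
print(row["concept_set"])                  # e.g. "λ‚˜#κ΅ν›ˆμ #λ‚΄μš©#주제#κ°•μ—°#ν•˜λ‹€"
print([row[str(i)] for i in range(1, 5)])  # the four candidate sentences
print(row["gold"])                         # presumably the index (1-4) of the correct one
```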

🌠 KoCommonGEN v2

KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models (ACL 2024 Findings)

Jaehyung Seo, Jaewook Lee, Chanjun Park, SeongTae Hong, Seungjun Lee and Heuiseok Lim

🏫 NLP & AI Lab, Korea University


πŸ”₯ News

  • September 27, 2023: Provided data support for the Open Ko-LLM Leaderboard
  • August 7, 2024: Dataset Release
  • August 10, 2024: Added experimental results for newly released models
  • August 14, 2024: Presented a research paper at ACL 2024

πŸ‘₯ Human Evaluation

We recruited 22 native Korean-speaking volunteers as human evaluators and paid them $0.80 per question.

| Model | # Evaluators | Average Score | Cohen's kappa | Krippendorff's alpha |
|---|---|---|---|---|
| Human | 22 | 0.8395 | 0.7693 | 0.7706 |
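
For reference, a toy sketch of computing these two agreement statistics with off-the-shelf packages (scikit-learn for Cohen's kappa, the `krippendorff` package for alpha); the answer arrays are hypothetical placeholders, not the actual evaluator data:

```python
# Toy sketch of the two inter-rater agreement statistics reported above.
import krippendorff
from sklearn.metrics import cohen_kappa_score

rater_a = [4, 2, 1, 3, 1, 2]  # one evaluator's chosen options (hypothetical)
rater_b = [4, 2, 1, 3, 4, 2]  # another evaluator's chosen options (hypothetical)

# Cohen's kappa is pairwise; with 22 raters one would average over rater pairs.
print(cohen_kappa_score(rater_a, rater_b))

# Krippendorff's alpha handles all raters at once (rows = raters, cols = items).
print(krippendorff.alpha(reliability_data=[rater_a, rater_b],
                         level_of_measurement="nominal"))
```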

πŸ€– Models (August 10, 2024)

Results of the 2-shot evaluation of newly released models.

| Model | Size | Acc_norm | Stderr | Link |
|---|---|---|---|---|
| GPT-4 (June 13, 2023) | | 0.7450 | | |
| Mistral-Nemo-Instruct | 12B | 0.6612 | 0.0163 | πŸ”— |
| Mistral-Nemo-Base | 12B | 0.6340 | 0.0166 | πŸ”— |
| Meta-Llama-3.1-8B | 8B | 0.6246 | 0.0166 | πŸ”— |
| QWEN2-7B (base) | 7B | 0.6187 | 0.0167 | πŸ”— |
| EXAONE-3.0-7.8B-Instruct | 7.8B | 0.6088 | 0.0168 | πŸ”— |
| MLP-KTLim-Bllossom-8B | 8B | 0.6057 | 0.0168 | πŸ”— |
| Meta-Llama-3.1-8B-Instruct | 8B | 0.6057 | 0.0168 | πŸ”— |
| KULLM3 | 10.8B | 0.6033 | 0.0168 | πŸ”— |
| QWEN2-7B (instruct) | 7B | 0.5832 | 0.0170 | πŸ”— |
| Gemma-2-9b-it | 9B | 0.5714 | 0.0170 | πŸ”— |
| Aya-23-8B | 8B | 0.5159 | 0.0172 | πŸ”— |
| Allganize-Alpha-Instruct | 8B | 0.4970 | 0.0172 | πŸ”— |

As described in the paper, the same protocol can be used to evaluate other models; a hedged sketch of the scoring follows.
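
The table reports length-normalized accuracy (Acc_norm): the model's prediction is the candidate sentence with the highest length-normalized log-likelihood given the prompt. Below is a sketch with `transformers`; the model name, prompt format, and normalization details are illustrative assumptions, and the paper's exact evaluation harness may differ:

```python
# Sketch of length-normalized multiple-choice scoring ("Acc_norm").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3.1-8B"  # any causal LM from the table above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

@torch.no_grad()
def normalized_logprob(prompt: str, choice: str) -> float:
    """Mean log-probability of the choice tokens given the prompt.

    Assumes the tokenization of `prompt` is a prefix of the tokenization
    of `prompt + choice`, which holds for most (not all) tokenizers.
    """
    n_prompt = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + choice, return_tensors="pt").input_ids
    logits = model(full_ids).logits
    # log-prob of each token conditioned on everything before it
    logprobs = torch.log_softmax(logits[0, :-1].float(), dim=-1)
    targets = full_ids[0, 1:]
    choice_lp = logprobs.gather(-1, targets[:, None]).squeeze(-1)[n_prompt - 1:]
    return choice_lp.mean().item()  # mean = length normalization

# 2-shot prompt: two worked examples followed by the target question,
# in whatever format the harness uses (placeholders here).
prompt = "..."
choices = ["...", "...", "...", "..."]  # the four candidate sentences
prediction = max(range(len(choices)),
                 key=lambda i: normalized_logprob(prompt, choices[i]))
```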

πŸ‡°πŸ‡·πŸ‡ΊπŸ‡ΈπŸ‡―πŸ‡΅πŸ‡¨πŸ‡³πŸ‡ͺπŸ‡Έ Code-switching

The dataset can be found on Hugging Face at: nlpai-lab/ko_commongen_v2_code_switching

This dataset contains code-switching data for the following languages:

  • Korean (korean)
  • English (english)
  • Japanese (japan)
  • Chinese (china)
  • Spanish (espanol)

(The code-switching data relies on machine translation, which may result in some inaccuracies.)
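
A minimal sketch of loading the code-switching data; whether the language identifiers above are dataset configurations or splits is an assumption to verify on the dataset page:

```python
# Sketch: discover how the languages are exposed, then load one of them.
from datasets import get_dataset_config_names, load_dataset

repo = "nlpai-lab/ko_commongen_v2_code_switching"
print(get_dataset_config_names(repo))  # how the languages are exposed

# Assumption: each language (korean, english, japan, china, espanol) is a
# config; if they are splits instead, drop the second argument and index
# the returned DatasetDict by language name.
english = load_dataset(repo, "english")
print(english)
```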

πŸ“– Citation

@inproceedings{seo2024Kocommongenv2,
    title = "KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models",
    author = "Jaehyung Seo and Jaewook Lee and Chanjun Park and SeongTae Hong and Seungjun Lee and Heuiseok Lim",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "TBD",
    doi = "TBD",
    pages = "TBD"}

🚨 Warning!

This dataset contains some instances of toxic speech.

πŸ™ Acknowledgement

We sincerely appreciate the dedication of Chanjun Park, Sanghoon Kim, and Sunghun Kim (Sung Kim) from Upstage AI in managing one of the benchmark datasets for the Open Ko-LLM Leaderboard.
