---
dataset_info:
  features:
  - name: test_name
    dtype: string
  - name: question_number
    dtype: int64
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: gold
    dtype: int64
  - name: option#1
    dtype: string
  - name: option#2
    dtype: string
  - name: option#3
    dtype: string
  - name: option#4
    dtype: string
  - name: option#5
    dtype: string
  - name: Category
    dtype: string
  - name: Human_Peformance
    dtype: float64
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 4220807
    num_examples: 936
  download_size: 1076028
  dataset_size: 4220807
---
# Dataset Card for "CSAT-QA"

## Dataset Summary

The field of Korean language processing is experiencing a surge in interest, illustrated by the introduction of open-source models such as Polyglot-Ko and proprietary models like HyperClova. Yet, as larger and better language models are developed at an accelerating pace, evaluation methods are not keeping up. Recognizing this gap, we at HAE-RAE are dedicated to creating tailored benchmarks for the rigorous evaluation of these models.

CSAT-QA is a comprehensive collection of 936 multiple-choice question answering (MCQA) questions manually collected from the College Scholastic Ability Test (CSAT), a rigorous Korean university entrance exam. CSAT-QA is divided into two subsets: a complete version encompassing all 936 questions, and a smaller, specialized version used for targeted evaluations. The smaller subset is further divided into six categories: Writing (WR), Grammar (GR), Reading Comprehension: Science (RCS), Reading Comprehension: Social Science (RCSS), Reading Comprehension: Humanities (RCH), and Literature (LI). The smaller subset also includes the recorded accuracy of South Korean students, providing a valuable real-world performance benchmark.

For a detailed explanation of how CSAT-QA was created, please check out the [accompanying blog post](https://github.com/guijinSON/hae-rae/blob/main/blog/CSAT-QA.md), and for evaluation check out [LM-Eval-Harness](https://github.com/EleutherAI/lm-evaluation-harness) on GitHub.

## Evaluation Results

| Category | Polyglot-Ko-12.8B | GPT-3.5-16k | GPT-4     | Human_Performance |
|----------|-------------------|-------------|-----------|-------------------|
| WR       | 0.09              | 9.09        | 45.45     | **54.0**          |
| GR       | 0.00              | 20.00       | 32.00     | **45.41**         |
| LI       | 21.62             | 35.14       | **59.46** | 54.38             |
| RCH      | 17.14             | 37.14       | **62.86** | 48.7              |
| RCS      | 10.81              | 27.03       | **64.86** | 39.93             |
| RCSS     | 11.9              | 30.95       | **71.43** | 44.54             |
| Average  | 10.26             | 26.56       | **56.01** | 47.8              |

## How to Use

CSAT-QA includes two subsets. The full version with 936 questions can be downloaded using the following code:

```
from datasets import load_dataset

dataset = load_dataset("EleutherAI/CSAT-QA", "full")
```

A more condensed version, which includes human accuracy data, can be downloaded using the following code:

```
from datasets import load_dataset

dataset = load_dataset("EleutherAI/CSAT-QA", "GR")  # Choose from WR, GR, LI, RCH, RCS, RCSS
```
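Once a subset is loaded, each row exposes the columns listed in the `dataset_info` header above: `context`, `question`, `option#1` through `option#5`, `gold`, `Category`, and `Human_Peformance`. The snippet below is a minimal sketch of how one might assemble a single row into a prompt; the `train` split name is taken from the `dataset_info` above, the assumption that `gold` is a 1-indexed answer matching the option numbering is ours, and the prompt format is only illustrative (it is not the format used by LM-Eval-Harness).

```
from datasets import load_dataset

# Load the Grammar (GR) subset; the split name is assumed to be "train",
# matching the splits listed in the dataset_info header above.
dataset = load_dataset("EleutherAI/CSAT-QA", "GR")
example = dataset["train"][0]

# Assemble an illustrative prompt from the passage, the question, and the five options.
prompt = example["context"] + "\n\n" + example["question"] + "\n"
for i in range(1, 6):
    prompt += f"{i}. " + example[f"option#{i}"] + "\n"

print(prompt)
print("Gold answer (assumed 1-indexed):", example["gold"])
print("Recorded student accuracy:", example["Human_Peformance"])
```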
## Evaluate using LM-Eval-Harness

To evaluate your model with the LM-Eval-Harness by EleutherAI, follow the steps below.

1. To install lm-eval from the GitHub repository main branch, run:

```
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```

2. To install additional multilingual tokenization and text segmentation packages, install the package with the multilingual extra:

```
pip install -e ".[multilingual]"
```

3. Run the evaluation with:

```
python main.py \
    --model hf-causal \
    --model_args pretrained=EleutherAI/polyglot-ko-1.3b \
    --tasks csatqa_wr,csatqa_gr,csatqa_rcs,csatqa_rcss,csatqa_rch,csatqa_li \
    --device cuda:0
```

## License

The copyright of this material belongs to the Korea Institute for Curriculum and Evaluation (한국교육과정평가원), and the material may be used for research purposes only.

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)