# KorMedMCQA : Multi-Choice Question Answering Benchmark for Korean Healthcare Professional Licensing Examinations

We present KorMedMCQA, the first Korean Medical Multiple-Choice Question Answering benchmark, derived from professional healthcare licensing examinations conducted in Korea between 2012 and 2024. The dataset contains 7,469 questions from examinations for doctors, nurses, pharmacists, and dentists, covering a wide range of medical disciplines. We evaluate the performance of 59 large language models, spanning proprietary and open-source models, multilingual and Korean-specialized models, and models fine-tuned for clinical applications. Our results show that applying Chain-of-Thought (CoT) reasoning can improve model performance by up to 4.5% over direct answering. We also investigate whether MedQA, one of the most widely used medical benchmarks, derived from the U.S. Medical Licensing Examination, can serve as a reliable proxy for evaluating model performance in other regions, in this case Korea. Our correlation analysis of model scores on KorMedMCQA and MedQA reveals that the two benchmarks align no better than benchmarks from entirely different domains (e.g., MedQA and MMLU-Pro). This finding underscores the substantial linguistic and clinical differences between Korean and U.S. medical contexts and reinforces the need for region-specific medical QA benchmarks.
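Each record is a five-option multiple-choice question. As an illustration only, here is a minimal sketch of turning one such record into a direct-answer prompt; the field names (`question`, `A` through `E`, `answer`) and the sample question are hypothetical, not the dataset's published schema:

```python
# Sketch: format one KorMedMCQA-style record as a direct-answer prompt.
# Field names and the sample record below are assumptions for illustration.

def build_prompt(record: dict) -> str:
    """Render a five-option multiple-choice record as a plain-text prompt."""
    options = "\n".join(f"{k}. {record[k]}" for k in "ABCDE")
    return (
        f"Question: {record['question']}\n"
        f"{options}\n"
        "Answer with a single letter (A-E):"
    )

sample = {
    "question": "Which organ secretes insulin?",
    "A": "Liver", "B": "Pancreas", "C": "Kidney", "D": "Spleen", "E": "Heart",
    "answer": "B",
}

prompt = build_prompt(sample)
print(prompt)
```

For CoT evaluation, the final instruction line would instead ask the model to reason step by step before giving its letter choice.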

Paper : https://arxiv.org/abs/2403.01469

## Notice

We have made the following updates to the KorMedMCQA dataset:

1. **Dentist Exam**: Incorporated exam questions from 2021 to 2024.
2. **Updated Test Sets**: Added the 2024 exam questions for the doctor, nurse, and pharmacist test sets.