Fix wording
README.md CHANGED
@@ -22,10 +22,9 @@ size_categories:
 <p align="center">
 <img src="https://huggingface.co/datasets/ikala/tmmluplus/resolve/main/cover.png" alt="A close-up image of a neat paper note with a white background. The text 'TMMLU+' is written horizontally across the center of the note in bold, black. Join us to work in multimodal LLM : https://ikala.ai/recruit/" style="max-width: 400" width=400 />
 </p>
+We present TMMLU+, a traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering dataset featuring 66 subjects, ranging from elementary to professional level.
 
-
-
-TMMLU+ dataset is 6 times larger and contains more balanced subjects compared to the previous version, [TMMLU](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval/data/TMMLU). We included benchmark results in TMMLU+ from closed-source models and 20 open-weight Chinese large language models of parameters ranging from 1.8B to 72B. Benchmark results show Traditional Chinese variants still lag behind those trained on Simplified Chinese major models.
+The TMMLU+ dataset is six times larger and contains more balanced subjects compared to its predecessor, [TMMLU](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval/data/TMMLU). We have included benchmark results in TMMLU+ from closed-source models and 20 open-source Chinese large language models, with parameters ranging from 1.8B to 72B. The benchmark results show that Traditional Chinese variants still lag behind those trained on major Simplified Chinese models.
 
 
 ```python
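The hunk ends at the opening of the README's Python usage snippet, whose body the diff does not show. As a minimal sketch of what loading one subject could look like, assuming the Hugging Face `datasets` library, a hypothetical config name `engineering_math`, and a `test` split (none of which are confirmed by this diff):

```python
# Minimal sketch, not the README's actual snippet.
# Assumptions: each of the 66 subjects is exposed as a dataset config,
# "engineering_math" is one such config, and a "test" split exists.
from datasets import load_dataset

ds = load_dataset("ikala/tmmluplus", "engineering_math")

# Inspect one multiple-choice question-answering row from the CSV-backed data.
print(ds["test"][0])
```

Repeating the call with each config name would cover all 66 subjects of the benchmark.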