Datasets: cordercorder committed
Commit 8667933 • Parent(s): a1645d5
add citation

README.md CHANGED
@@ -14,3 +14,15 @@ arxiv:
14 |
15 |
16 | M3KE, or Massive Multi-Level Multi-Subject Knowledge Evaluation, is a benchmark developed to assess the knowledge acquired by large Chinese language models by evaluating their multitask accuracy in both zero- and few-shot settings. The benchmark comprises 20,477 questions spanning 71 tasks. For further information about M3KE, please consult our [paper](https://arxiv.org/abs/2305.10263) or visit our [GitHub](https://github.com/tjunlp-lab/M3KE) page.
17 | +
18 | +
19 | + ```
20 | + @misc{liu2023m3ke,
21 | +   title={M3KE: A Massive Multi-Level Multi-Subject Knowledge Evaluation Benchmark for Chinese Large Language Models},
22 | +   author={Chuang Liu and Renren Jin and Yuqi Ren and Linhao Yu and Tianyu Dong and Xiaohan Peng and Shuting Zhang and Jianxiang Peng and Peiyi Zhang and Qingqing Lyu and Xiaowen Su and Qun Liu and Deyi Xiong},
23 | +   year={2023},
24 | +   eprint={2305.10263},
25 | +   archivePrefix={arXiv},
26 | +   primaryClass={cs.CL}
27 | + }
28 | + ```