SUFE-AIFLM-Lab committed
Commit: dc4f251
Parent(s): f91a4f1
Update README.md
README.md CHANGED
@@ -9,6 +9,7 @@ language:
 pretty_name: FinEval
 size_categories:
 - 10K<n<100K
+viewer: false
 ---

 FinEval is a collection of high-quality multiple-choice questions covering domains such as finance, economics, accounting, and certifications. It consists of 4,661 questions spanning 34 distinct academic subjects. To ensure a comprehensive assessment of model performance, FinEval employs a range of methods, including zero-shot, few-shot, answer-only, and chain-of-thought prompts. Evaluating state-of-the-art large language models in both Chinese and English on FinEval reveals that only GPT-4 achieves an accuracy of 60% across different prompt settings, highlighting the substantial growth potential of large language models in financial domain knowledge. Our work provides a more comprehensive benchmark for evaluating financial knowledge, utilizing simulated exam data and encompassing a wide range of large language model assessments.
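Although this commit disables the hosted dataset viewer (`viewer: false`), the data remains loadable programmatically. Below is a minimal sketch of fetching FinEval with the `datasets` library and building a zero-shot, answer-only prompt of the kind the description mentions; the repo id, config name, split, and column names are assumptions for illustration, not confirmed by this page.

```python
# Minimal sketch: load one FinEval subject and format a zero-shot,
# answer-only multiple-choice prompt. Repo id, config name ("accounting"),
# split ("dev"), and field names are assumed, not confirmed by this commit.
from datasets import load_dataset

# Assumed repo id based on the committing organization; FinEval is
# organized by subject, so a per-subject config name may be required.
ds = load_dataset("SUFE-AIFLM-Lab/FinEval", "accounting", split="dev")

example = ds[0]  # one multiple-choice question
prompt = (
    f"Question: {example['question']}\n"
    f"A. {example['A']}\n"
    f"B. {example['B']}\n"
    f"C. {example['C']}\n"
    f"D. {example['D']}\n"
    "Answer:"
)
print(prompt)
```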