---
language:
- zh
pretty_name: FinEval
size_categories:
- 1K<n<10K
viewer: false
---

<p><h1> The FinEval Dataset! </h1></p>
FinEval is a collection of high-quality multiple-choice questions covering various domains of financial knowledge, spanning finance, economy, accounting, and certificate.

Each subject consists of three splits: dev, val, and test. The dev set for each subject consists of five exemplars with explanations for few-shot evaluation. The val set is intended for hyperparameter tuning, and the test set is for model evaluation. Labels on the test split are not released; users are required to submit their results to automatically obtain test accuracy.
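To see the three splits concretely, a subject configuration can be loaded and its splits sized up. This is a minimal sketch: it assumes the `finance` configuration used in the loading example at the end of this README, and that the loader exposes the splits under the dev/val/test names described above.

```python
from datasets import load_dataset

# Load one subject configuration ("finance", as in the loading example below).
dataset = load_dataset("SUFE-AIFLM-Lab/FinEval", name="finance")

# Print the size of each exposed split; the dev split should hold the
# five few-shot exemplars with explanations described above.
for split_name, split in dataset.items():
    print(f"{split_name}: {len(split)} examples")
```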
# Language

The language of the data is Chinese.

# Performance Leaderboard

We divide the evaluation into Answer Only and Chain of Thought (CoT). For example prompts for both methods, please refer to zero-shot Answer Only, few-shot Answer Only, and Chain of Thought. A minimal sketch of the two prompt styles is shown below.
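The following sketch illustrates how the two styles differ. The field names (`question`, options `A`–`D`) are assumptions about the data schema, and the actual leaderboard prompts, which are in Chinese, are not reproduced here.

```python
# Hypothetical example item; real field names in FinEval may differ.
example = {
    "question": "Which of the following is a monetary policy instrument?",
    "A": "Reserve requirement ratio",
    "B": "Fiscal subsidy",
    "C": "Tariff",
    "D": "Minimum wage",
}

# Shared question stem listing the four options.
stem = (
    f"Question: {example['question']}\n"
    f"A. {example['A']}\nB. {example['B']}\n"
    f"C. {example['C']}\nD. {example['D']}\n"
)

# Answer Only: the model is asked to output just the option letter.
answer_only = stem + "Answer:"

# Chain of Thought: the model is asked to reason step by step
# before committing to an option.
chain_of_thought = stem + "Let's think step by step.\nExplanation:"
```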

Below is the average accuracy (%) on the test split. We report the average accuracy over the subjects within each category; the "Average" column indicates the average accuracy over all the subjects. Notably, we only report the results from each model under its best setting, which is determined by the highest average accuracy achieved among four settings (i.e., zero- and few-shot learning with and without CoT); a small illustration of this selection rule follows the table:
| Model | Size | Finance | Economy | Accounting | Certificate | Average |
|------------------------|---------|:-------:|:-------:|:----------:|:-----------:|:-------:|
| GPT-4 | unknown | 71.0 | 74.5 | 59.3 | 70.4 | 68.6 |
| ChatGPT | 175B | 59.3 | 61.6 | 45.2 | 55.1 | 55.0 |
| Qwen-7B | 7B | 54.5 | 54.4 | 50.3 | 55.8 | 53.8 |
| Qwen-Chat-7B | 7B | 51.5 | 52.1 | 44.5 | 53.6 | 50.5 |
| Baichuan-13B-Base | 13B | 52.6 | 50.2 | 43.4 | 53.5 | 50.1 |
| Baichuan-13B-Chat | 13B | 51.6 | 51.1 | 41.7 | 52.8 | 49.4 |
| ChatGLM2-6B | 6B | 46.5 | 46.4 | 44.5 | 51.5 | 47.4 |
| InternLM-7B | 7B | 49.0 | 49.2 | 40.5 | 49.4 | 47.1 |
| InternLM-Chat-7B | 7B | 48.4 | 49.1 | 40.8 | 49.5 | 47.0 |
| LLaMA-2-Chat-70B | 70B | 47.1 | 46.7 | 41.5 | 45.7 | 45.2 |
| Falcon-40B | 40B | 45.4 | 43.2 | 35.8 | 44.8 | 42.4 |
| Baichuan-7B | 7B | 44.9 | 41.5 | 34.9 | 45.6 | 42.0 |
| LLaMA-2-Chat-13B | 13B | 41.6 | 38.4 | 34.1 | 42.1 | 39.3 |
| Ziya-LLaMA-13B-v1 | 13B | 43.3 | 36.9 | 34.3 | 41.2 | 39.3 |
| Bloomz-7b1-mt | 7B | 41.4 | 42.1 | 32.5 | 39.7 | 38.8 |
| LLaMA-2-13B | 13B | 39.5 | 38.6 | 31.6 | 39.6 | 37.4 |
| ChatGLM-6B | 6B | 38.8 | 36.2 | 33.8 | 39.1 | 37.2 |
| Chinese-Llama-2-7B | 7B | 37.8 | 37.8 | 31.4 | 36.7 | 35.9 |
| Chinese-Alpaca-Plus-7B | 7B | 30.5 | 33.4 | 32.7 | 38.5 | 34.0 |
| moss-moon-003-sft | 16B | 35.6 | 34.3 | 28.7 | 35.6 | 33.7 |
| LLaMA-2-Chat-7B | 7B | 35.6 | 31.8 | 31.9 | 34.0 | 33.5 |
| LLaMA-2-7B | 7B | 34.9 | 36.4 | 31.4 | 31.6 | 33.4 |
| AquilaChat-7B | 7B | 34.2 | 31.3 | 29.8 | 36.2 | 33.1 |
| moss-moon-003-base | 16B | 32.2 | 33.1 | 29.2 | 30.7 | 31.2 |
| Aquila-7B | 7B | 27.1 | 31.6 | 32.4 | 33.6 | 31.2 |
| LLaMA-13B | 13B | 33.1 | 29.7 | 27.2 | 33.6 | 31.1 |
| Falcon-7B | 7B | 28.5 | 28.2 | 27.5 | 27.4 | 27.9 |
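As a concrete reading of the selection rule above: for each model, the reported numbers come from whichever of the four prompting settings yields the highest accuracy averaged over all subjects. A minimal sketch, with made-up numbers that are not taken from the leaderboard:

```python
# Hypothetical per-setting accuracies averaged over all subjects
# (illustrative numbers only, not leaderboard results).
settings = {
    "zero-shot Answer Only": 52.1,
    "few-shot Answer Only": 53.8,
    "zero-shot CoT": 50.9,
    "few-shot CoT": 51.7,
}

# The reported ("best") setting is the one with the highest average accuracy.
best = max(settings, key=settings.get)
print(best, settings[best])  # few-shot Answer Only 53.8
```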
# Load the data

```python
from datasets import load_dataset

# Load the "finance" subject configuration of FinEval.
dataset = load_dataset("SUFE-AIFLM-Lab/FinEval", name="finance")
```