hawei committed · Commit dfcc484 · verified · 1 Parent(s): 01b29be

Add paper link

Files changed (1): README.md (+111 −107)

README.md CHANGED

---
license: llama3.1
datasets:
- survivi/Llama-3-SynE-Dataset
- hfl/stem_zh_instruction
- llamafactory/alpaca_zh
- llamafactory/alpaca_gpt4_zh
- hfl/ruozhiba_gpt4
- codingsteven/Llama-3-8B-chat
language:
- zh
base_model:
- meta-llama/Llama-3.1-8B
model-index:
- name: Control-LLM-Llama3.1-8B-SynE-Concat16-Dlerp
  results:
  - task:
      type: pretraining-evaluation
    dataset:
      type: mixed
      name: Pretraining Evaluation Dataset
    metrics:
    - name: exact_match,strict-match (meta_pretrain)
      type: exact_match
      value: 0.48514264142803215
      stderr: 0.003513307445696379
      verified: false
    - name: exact_match,strict-match (meta_bbh_3shot_cot_pretrain)
      type: exact_match
      value: 0.6817693134695131
      stderr: 0.0057729694388110805
      verified: false
    - name: acc,none (meta_mmlu_5shot_pretrain)
      type: accuracy
      value: 0.65596068936049
      stderr: 0.0040090726054856874
      verified: false
    - name: exact_match,strict-match (meta_mmlu_pro_5shot_pretrain)
      type: exact_match
      value: 0.3787400265957447
      stderr: 0.004422383756050139
      verified: false
  - task:
      type: chinese-evaluation
    dataset:
      type: mixed
      name: Chinese Evaluation Dataset
    metrics:
    - name: exact_match,strict-match (zh_pretrain_multishot)
      type: exact_match
      value: 0.44848391089108913
      stderr: 0.004255614019851072
      verified: false
    - name: acc,none (ceval-valid)
      type: accuracy
      value: 0.5698365527488856
      stderr: 0.012893833892221353
      verified: false
    - name: exact_match,strict-match (ceval-valid-pretrain-cot_zh)
      type: exact_match
      value: 0.4472511144130758
      stderr: 0.013203606600472227
      verified: false
    - name: acc,none (cmmlu)
      type: accuracy
      value: 0.5602659298912105
      stderr: 0.0044928840587441605
      verified: false
    - name: exact_match,strict-match (cmmlu_pretrain_cot_zh)
      type: exact_match
      value: 0.4486271801070627
      stderr: 0.00449553418468653
      verified: false
---

# Control-LLM-Llama3.1-8B-SynE-Concat16-Dlerp

This is a fine-tuned version of Llama-3.1-8B for multilingual (Chinese) tasks, trained on the SynE dataset with the Control LLM Concat16-Dlerp approach.

## Linked Paper

This model is associated with the paper [Control-LLM](https://arxiv.org/abs/2501.10979).
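
For quick local testing, here is a minimal usage sketch with the Hugging Face `transformers` library. The repository id is a placeholder inferred from the model name (replace `<org>` with the account that actually hosts this checkpoint), and the generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: replace "<org>" with the account hosting this checkpoint.
repo_id = "<org>/Control-LLM-Llama3.1-8B-SynE-Concat16-Dlerp"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 8B model within a single modern GPU
    device_map="auto",
)

prompt = "请简要介绍一下机器学习。"  # "Give a brief introduction to machine learning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```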

## Evaluation Results

Here is an overview of the evaluation results and findings:

### Benchmark Results Table

The table below summarizes evaluation results across Chinese tasks and original capabilities.

| **Model**          | **CEval** | **CEvalC** | **CMMLU** | **CMMLUC** | **C-Avg** | **BBH** | **MLU** | **MLUP** | **O-Avg** | **Overall** |
|--------------------|-----------|------------|-----------|------------|-----------|---------|---------|----------|-----------|-------------|
| Llama3.1-8B        | 48.3      | 12.8       | 51.1      | 14.1       | 13.9      | 65.2    | 65.4    | 35.5     | 45.9      | 29.9        |
| Llama-3-SynE       | 57.7      | 22.3       | 57.1      | 22.8       | 22.8      | 61.9    | 64.0    | 32.6     | 42.9      | 32.9        |
| Full Param Tune    | 59.0      | 40.2       | **60.2**  | 44.3       | 43.8      | 64.8    | 64.9    | 35.0     | 45.4      | 44.6        |
| Stack Expansion    | 56.0      | 32.7       | 55.2      | 33.4       | 33.3      | 62.3    | 65.6    | 35.3     | 44.8      | 39.1        |
| Concat-Lerp        | 57.1      | 34.8       | 57.0      | 37.4       | 37.1      | 64.4    | 64.6    | 35.8     | 45.9      | 41.5        |
| Hybrid Expansion   | **58.9**  | 44.7       | 57.9      | 44.3       | 44.4      | 65.1    | **65.7**| 36.9     | 46.8      | 45.6        |
| **Control LLM\***  | 57.0      | **44.7**   | 56.0      | **44.9**   | **44.8**  | **68.2**| 65.6    | **37.9** | **48.5**  | **46.7**    |
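
The metric names in the metadata above (for example `acc,none` and `exact_match,strict-match`) follow lm-evaluation-harness conventions. As a sketch under that assumption, the two public Chinese tasks could be re-run roughly as follows; the `meta_*` and `*_pretrain` entries appear to be custom task configurations not bundled with the harness, and the repo id is again a placeholder.

```python
# Sketch only: re-running the public Chinese tasks with EleutherAI's
# lm-evaluation-harness (pip install lm-eval). The meta_* / *_pretrain tasks
# referenced above look like custom task configs not shipped with the harness.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=<org>/Control-LLM-Llama3.1-8B-SynE-Concat16-Dlerp,dtype=bfloat16",
    tasks=["ceval-valid", "cmmlu"],
    batch_size=8,
)
print(results["results"])  # per-task metric dicts keyed like "acc,none"
```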

---

### Explanation:
- **CEval**: Chinese Evaluation
- **CEvalC**: Chinese Evaluation (CoT, Chain of Thought)
- **CMMLU**: Chinese MMLU
- **CMMLUC**: Chinese MMLU (CoT)
- **C-Avg**: Chinese: size-weighted average across CEval, CEvalC, CMMLU, and CMMLUC
- **BBH**: BigBench Hard
- **MLU**: MMLU (Massive Multitask Language Understanding)
- **MLUP**: MMLU Pro
- **O-Avg**: Original Capability: size-weighted average across BBH, MLU, and MLUP (see the sketch after this list)
- **Overall**: Combined average across all tasks
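
To make the size-weighted averages concrete, here is a short sketch that computes an O-Avg-style aggregate from per-task scores and example counts. The counts below are hypothetical placeholders (this card does not list the evaluation set sizes), so the output is not expected to reproduce the table's O-Avg exactly.

```python
# Size-weighted average sketch. Scores are the Control LLM row from the table;
# the example counts are hypothetical placeholders, not the real set sizes.
scores = {"BBH": 68.2, "MLU": 65.6, "MLUP": 37.9}
sizes = {"BBH": 6500, "MLU": 14000, "MLUP": 12000}  # hypothetical counts

def size_weighted_avg(scores: dict[str, float], sizes: dict[str, int]) -> float:
    """Weight each task's score by its number of examples, then normalize."""
    total = sum(sizes[t] for t in scores)
    return sum(scores[t] * sizes[t] for t in scores) / total

print(f"O-Avg (with placeholder sizes) ~= {size_weighted_avg(scores, sizes):.1f}")
```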