Adding Evaluation Results

#1
by DreadPoor - opened
Files changed (1)
  1. README.md +114 -1
README.md CHANGED
@@ -7,7 +7,105 @@ library_name: transformers
  tags:
  - mergekit
  - merge
-
+ model-index:
+ - name: H_the_eighth-8B-LINEAR
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: wis-k/instruction-following-eval
+       split: train
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 74.69
+       name: averaged accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=DreadPoor%2FH_the_eighth-8B-LINEAR
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: SaylorTwift/bbh
+       split: test
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 34.15
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=DreadPoor%2FH_the_eighth-8B-LINEAR
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: lighteval/MATH-Hard
+       split: test
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 17.75
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=DreadPoor%2FH_the_eighth-8B-LINEAR
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       split: train
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 10.4
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=DreadPoor%2FH_the_eighth-8B-LINEAR
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 12.76
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=DreadPoor%2FH_the_eighth-8B-LINEAR
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 31.38
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=DreadPoor%2FH_the_eighth-8B-LINEAR
+       name: Open LLM Leaderboard
  ---
  # merge
 
@@ -45,3 +143,18 @@ normalize: true
  int8_mask: true
  dtype: bfloat16
  ```
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/DreadPoor__H_the_eighth-8B-LINEAR-details)!
+ Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=DreadPoor%2FH_the_eighth-8B-LINEAR&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
+
+ | Metric              | Value (%) |
+ |---------------------|----------:|
+ | **Average**         |     30.19 |
+ | IFEval (0-Shot)     |     74.69 |
+ | BBH (3-Shot)        |     34.15 |
+ | MATH Lvl 5 (4-Shot) |     17.75 |
+ | GPQA (0-shot)       |     10.40 |
+ | MuSR (0-shot)       |     12.76 |
+ | MMLU-PRO (5-shot)   |     31.38 |
+
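For anyone consuming these results downstream: the `model-index` block added above is standard Hugging Face model-card metadata, so it can be read like any YAML front matter. Below is a minimal sketch using PyYAML, with the metadata trimmed to a single entry for illustration:

```python
import yaml  # PyYAML

# Trimmed excerpt of the model-index metadata added in this PR (one entry shown).
CARD_METADATA = """
model-index:
- name: H_the_eighth-8B-LINEAR
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 74.69
      name: averaged accuracy
"""

metadata = yaml.safe_load(CARD_METADATA)
for result in metadata["model-index"][0]["results"]:
    dataset = result["dataset"]["name"]
    for metric in result["metrics"]:
        print(f"{dataset}: {metric['name']} = {metric['value']}")
# IFEval (0-Shot): averaged accuracy = 74.69
```

The full card can also be fetched from the Hub with `huggingface_hub.ModelCard.load("DreadPoor/H_the_eighth-8B-LINEAR")`, which parses the same entries into `card.data.eval_results`.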