T145 committed on
Commit: 3d3fbc9
Parent(s): 4ed3aac

Adding Evaluation Results


This is an automated PR created with [this space](https://huggingface.co/spaces/T145/open-llm-leaderboard-results-to-modelcard)!

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

Please report any issues here: https://huggingface.co/spaces/T145/open-llm-leaderboard-results-to-modelcard/discussions
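For context, the evaluation results land in the model card's YAML front matter — the block between the two `---` lines at the top of README.md, as shown in the diff. A minimal, hypothetical sketch of splitting a card file into metadata and body (the helper name is ours, not part of the PR tooling, and it assumes the standard front-matter layout):

```python
def split_front_matter(readme: str):
    """Split a model-card README into (yaml_front_matter, markdown_body).

    Assumes the file opens with a '---' line and the metadata block is
    closed by the next '---' line, as in a standard Hub model card.
    """
    lines = readme.splitlines()
    if not lines or lines[0].strip() != "---":
        return "", readme  # no front matter present
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":
            # Everything between the two markers is metadata;
            # everything after the closing marker is the card body.
            return "\n".join(lines[1:i]), "\n".join(lines[i + 1:])
    return "", readme  # unterminated front matter: treat as body

card = "---\nlicense: other\nlicense_name: llama3\n---\n\n# llama-3-gutenberg-8B\n"
meta, body = split_front_matter(card)
```

The `model-index` section this PR adds goes inside that metadata block, which the Hub parses to render the evaluation widget.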

Files changed (1):
  1. README.md (+114 −2)
README.md CHANGED
@@ -1,11 +1,110 @@
 ---
 library_name: transformers
 base_model:
-- nbeerbower/llama-3-bophades-v3-8B
+- nbeerbower/llama-3-bophades-v3-8B
 datasets:
 - jondurbin/gutenberg-dpo-v0.1
 license: other
 license_name: llama3
+model-index:
+- name: llama-3-gutenberg-8B
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: wis-k/instruction-following-eval
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 43.72
+      name: averaged accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=nbeerbower%2Fllama-3-gutenberg-8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: SaylorTwift/bbh
+      split: test
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 27.96
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=nbeerbower%2Fllama-3-gutenberg-8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: lighteval/MATH-Hard
+      split: test
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 7.78
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=nbeerbower%2Fllama-3-gutenberg-8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 6.82
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=nbeerbower%2Fllama-3-gutenberg-8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 10.05
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=nbeerbower%2Fllama-3-gutenberg-8B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 31.45
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=nbeerbower%2Fllama-3-gutenberg-8B
+      name: Open LLM Leaderboard
 ---
 
 # llama-3-gutenberg-8B
@@ -117,4 +216,17 @@ dpo_trainer = DPOTrainer(
     max_length=1536,
     force_use_ref_model=True
 )
-```
+```# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/nbeerbower__llama-3-gutenberg-8B-details)!
+Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=nbeerbower%2Fllama-3-gutenberg-8B&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
+
+| Metric             |Value (%)|
+|--------------------|--------:|
+|**Average**         |    21.30|
+|IFEval (0-Shot)     |    43.72|
+|BBH (3-Shot)        |    27.96|
+|MATH Lvl 5 (4-Shot) |     7.78|
+|GPQA (0-shot)       |     6.82|
+|MuSR (0-shot)       |    10.05|
+|MMLU-PRO (5-shot)   |    31.45|
+
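As a sanity check (illustrative only, not part of the automated PR): the **Average** row in the added table is consistent with the unweighted mean of the six benchmark scores.

```python
# Illustrative check: the leaderboard "Average" matches the unweighted
# mean of the six benchmark scores reported in the added table.
scores = {
    "IFEval (0-Shot)": 43.72,
    "BBH (3-Shot)": 27.96,
    "MATH Lvl 5 (4-Shot)": 7.78,
    "GPQA (0-shot)": 6.82,
    "MuSR (0-shot)": 10.05,
    "MMLU-PRO (5-shot)": 31.45,
}
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 21.3
```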