Adding Evaluation Results

#3
by Pawkow - opened
Files changed (1)
  1. README.md +114 -1
README.md CHANGED
@@ -16,6 +16,105 @@ language:
 - en
 pipeline_tag: text-generation
 new_version: Pinkstack/Superthoughts-lite-v1
+model-index:
+- name: Superthoughts-lite-1.8B-experimental-o1
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: wis-k/instruction-following-eval
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 3.75
+      name: averaged accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperthoughts-lite-1.8B-experimental-o1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: SaylorTwift/bbh
+      split: test
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 9.13
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperthoughts-lite-1.8B-experimental-o1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: lighteval/MATH-Hard
+      split: test
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 3.17
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperthoughts-lite-1.8B-experimental-o1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 3.36
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperthoughts-lite-1.8B-experimental-o1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 1.76
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperthoughts-lite-1.8B-experimental-o1
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 9.45
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperthoughts-lite-1.8B-experimental-o1
+      name: Open LLM Leaderboard
 ---
 ![superthoughtslight.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/2LuPB_ZPCGni3-PyCkL0-.png)
 # Information
@@ -54,4 +153,18 @@ Generated inside the android application, Pocketpal via GGUF Q8, using the model
 This smollm2 model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
 Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Pinkstack__Superthoughts-lite-1.8B-experimental-o1-details)!
-Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Pinkstack%2FSuperthoughts-lite-1.8B-experimental-o1&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
+Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Pinkstack%2FSuperthoughts-lite-1.8B-experimental-o1&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Pinkstack__Superthoughts-lite-1.8B-experimental-o1-details)!
+Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Pinkstack%2FSuperthoughts-lite-1.8B-experimental-o1&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
+
+| Metric |Value (%)|
+|-------------------|--------:|
+|**Average** | 5.10|
+|IFEval (0-Shot) | 3.75|
+|BBH (3-Shot) | 9.13|
+|MATH Lvl 5 (4-Shot)| 3.17|
+|GPQA (0-shot) | 3.36|
+|MuSR (0-shot) | 1.76|
+|MMLU-PRO (5-shot) | 9.45|
+
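As a sanity check on the summary table added at the end of the diff: the **Average** row is the arithmetic mean of the six benchmark scores. A quick verification in plain Python, with the values copied from the table (no Hugging Face libraries involved):

```python
# Benchmark scores (%) from the summary table added in this PR.
scores = {
    "IFEval (0-Shot)": 3.75,
    "BBH (3-Shot)": 9.13,
    "MATH Lvl 5 (4-Shot)": 3.17,
    "GPQA (0-shot)": 3.36,
    "MuSR (0-shot)": 1.76,
    "MMLU-PRO (5-shot)": 9.45,
}

# Arithmetic mean of the six benchmarks.
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 5.10 -- matches the Average row above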
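Once merged, the `model-index:` block makes these results machine-readable from the model card itself. A minimal sketch of reading them back, assuming the `huggingface_hub` package is installed and this PR has landed on `Pinkstack/Superthoughts-lite-1.8B-experimental-o1`:

```python
# Sketch: read the evaluation metadata added by this PR back from the model card.
# Assumes `pip install huggingface_hub` and that the PR has been merged.
from huggingface_hub import ModelCard

card = ModelCard.load("Pinkstack/Superthoughts-lite-1.8B-experimental-o1")

# `eval_results` is parsed from the `model-index:` YAML block in the README.
for result in card.data.eval_results or []:
    print(f"{result.dataset_name}: {result.metric_name} = {result.metric_value}")
```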