Text Generation
Transformers
PyTorch
English
gpt2a
custom_code
crumb committed
Commit 50b5761 · 1 Parent(s): 35d7309

Update README.md

Files changed (1)
  1. README.md +2 -139
README.md CHANGED
@@ -8,10 +8,7 @@ language:
 
 ---
 
-
-### Open LLM Leaderboard Average Score: 0.3057
-
-This is just above base gpt2 only because of truthfulqa score bringing the average up, it has a higher truthfulqa score than any base gpt2 model. It is also just under pythia-160m for average score (0.01%) and
+A modified GPT-2 model with only 25 million non-embedding parameters that outbenches GPT-2 (124M), Pythia-70m/160m, and Cerebras-111m. It has ScaledSinusoidal position embeddings, embedding layernorm, and no biases, and was trained on only 8 billion tokens of the SlimPajama dataset at home on 2x A6000.
 
 | model | avg | arc | hellaswag | mmlu | truthfulqa |
 | --- | --- | --- | --- | --- | --- |
@@ -23,139 +20,5 @@ This is just above base gpt2 only because of truthfulqa score bringing the avera
 | pythia 160m | 30.58 | 22.78 | 30.34 | 24.95 | 44.26 |
 
 
-| Task |Version| Metric |Value | |Stderr|
-|-------------|------:|--------|-----:|---|-----:|
-|arc_challenge| 0|acc |0.1741|± |0.0111|
-| | |acc_norm|**0.2176**|± |0.0121|
-
-| Task |Version| Metric |Value | |Stderr|
-|---------|------:|--------|-----:|---|-----:|
-|hellaswag| 0|acc |0.2698|± |0.0044|
-| | |acc_norm|**0.2735**|± |0.0044|
-
-
-| Task |Version|Metric|Value | |Stderr|
-|-------------|------:|------|-----:|---|-----:|
-|truthfulqa_mc| 1|mc1 |0.2803|± |0.0157|
-| | |mc2 |**0.4766**|± |0.0156|
-
-
-| Task |Version| Metric |Value | |Stderr|
-|-------------------------------------------------|------:|--------|-----:|---|-----:|
-|hendrycksTest-abstract_algebra | 1|acc |0.2200|± |0.0416|
-| | |acc_norm|0.2200|± |0.0416|
-|hendrycksTest-anatomy | 1|acc |0.3333|± |0.0407|
-| | |acc_norm|0.3333|± |0.0407|
-|hendrycksTest-astronomy | 1|acc |0.2237|± |0.0339|
-| | |acc_norm|0.2237|± |0.0339|
-|hendrycksTest-business_ethics | 1|acc |0.2000|± |0.0402|
-| | |acc_norm|0.2000|± |0.0402|
-|hendrycksTest-clinical_knowledge | 1|acc |0.2189|± |0.0254|
-| | |acc_norm|0.2189|± |0.0254|
-|hendrycksTest-college_biology | 1|acc |0.2083|± |0.0340|
-| | |acc_norm|0.2083|± |0.0340|
-|hendrycksTest-college_chemistry | 1|acc |0.3400|± |0.0476|
-| | |acc_norm|0.3400|± |0.0476|
-|hendrycksTest-college_computer_science | 1|acc |0.3100|± |0.0465|
-| | |acc_norm|0.3100|± |0.0465|
-|hendrycksTest-college_mathematics | 1|acc |0.3100|± |0.0465|
-| | |acc_norm|0.3100|± |0.0465|
-|hendrycksTest-college_medicine | 1|acc |0.2197|± |0.0316|
-| | |acc_norm|0.2197|± |0.0316|
-|hendrycksTest-college_physics | 1|acc |0.3431|± |0.0472|
-| | |acc_norm|0.3431|± |0.0472|
-|hendrycksTest-computer_security | 1|acc |0.2000|± |0.0402|
-| | |acc_norm|0.2000|± |0.0402|
-|hendrycksTest-conceptual_physics | 1|acc |0.2809|± |0.0294|
-| | |acc_norm|0.2809|± |0.0294|
-|hendrycksTest-econometrics | 1|acc |0.2544|± |0.0410|
-| | |acc_norm|0.2544|± |0.0410|
-|hendrycksTest-electrical_engineering | 1|acc |0.2414|± |0.0357|
-| | |acc_norm|0.2414|± |0.0357|
-|hendrycksTest-elementary_mathematics | 1|acc |0.2566|± |0.0225|
-| | |acc_norm|0.2566|± |0.0225|
-|hendrycksTest-formal_logic | 1|acc |0.1825|± |0.0346|
-| | |acc_norm|0.1825|± |0.0346|
-|hendrycksTest-global_facts | 1|acc |0.2000|± |0.0402|
-| | |acc_norm|0.2000|± |0.0402|
-|hendrycksTest-high_school_biology | 1|acc |0.3161|± |0.0265|
-| | |acc_norm|0.3161|± |0.0265|
-|hendrycksTest-high_school_chemistry | 1|acc |0.2759|± |0.0314|
-| | |acc_norm|0.2759|± |0.0314|
-|hendrycksTest-high_school_computer_science | 1|acc |0.2400|± |0.0429|
-| | |acc_norm|0.2400|± |0.0429|
-|hendrycksTest-high_school_european_history | 1|acc |0.2909|± |0.0355|
-| | |acc_norm|0.2909|± |0.0355|
-|hendrycksTest-high_school_geography | 1|acc |0.3535|± |0.0341|
-| | |acc_norm|0.3535|± |0.0341|
-|hendrycksTest-high_school_government_and_politics| 1|acc |0.2280|± |0.0303|
-| | |acc_norm|0.2280|± |0.0303|
-|hendrycksTest-high_school_macroeconomics | 1|acc |0.2051|± |0.0205|
-| | |acc_norm|0.2051|± |0.0205|
-|hendrycksTest-high_school_mathematics | 1|acc |0.2630|± |0.0268|
-| | |acc_norm|0.2630|± |0.0268|
-|hendrycksTest-high_school_microeconomics | 1|acc |0.3403|± |0.0308|
-| | |acc_norm|0.3403|± |0.0308|
-|hendrycksTest-high_school_physics | 1|acc |0.2384|± |0.0348|
-| | |acc_norm|0.2384|± |0.0348|
-|hendrycksTest-high_school_psychology | 1|acc |0.2257|± |0.0179|
-| | |acc_norm|0.2257|± |0.0179|
-|hendrycksTest-high_school_statistics | 1|acc |0.4722|± |0.0340|
-| | |acc_norm|0.4722|± |0.0340|
-|hendrycksTest-high_school_us_history | 1|acc |0.2206|± |0.0291|
-| | |acc_norm|0.2206|± |0.0291|
-|hendrycksTest-high_school_world_history | 1|acc |0.2658|± |0.0288|
-| | |acc_norm|0.2658|± |0.0288|
-|hendrycksTest-human_aging | 1|acc |0.2063|± |0.0272|
-| | |acc_norm|0.2063|± |0.0272|
-|hendrycksTest-human_sexuality | 1|acc |0.2366|± |0.0373|
-| | |acc_norm|0.2366|± |0.0373|
-|hendrycksTest-international_law | 1|acc |0.2562|± |0.0398|
-| | |acc_norm|0.2562|± |0.0398|
-|hendrycksTest-jurisprudence | 1|acc |0.2130|± |0.0396|
-| | |acc_norm|0.2130|± |0.0396|
-|hendrycksTest-logical_fallacies | 1|acc |0.2393|± |0.0335|
-| | |acc_norm|0.2393|± |0.0335|
-|hendrycksTest-machine_learning | 1|acc |0.2054|± |0.0383|
-| | |acc_norm|0.2054|± |0.0383|
-|hendrycksTest-management | 1|acc |0.1942|± |0.0392|
-| | |acc_norm|0.1942|± |0.0392|
-|hendrycksTest-marketing | 1|acc |0.1923|± |0.0258|
-| | |acc_norm|0.1923|± |0.0258|
-|hendrycksTest-medical_genetics | 1|acc |0.3000|± |0.0461|
-| | |acc_norm|0.3000|± |0.0461|
-|hendrycksTest-miscellaneous | 1|acc |0.2708|± |0.0159|
-| | |acc_norm|0.2708|± |0.0159|
-|hendrycksTest-moral_disputes | 1|acc |0.2168|± |0.0222|
-| | |acc_norm|0.2168|± |0.0222|
-|hendrycksTest-moral_scenarios | 1|acc |0.2313|± |0.0141|
-| | |acc_norm|0.2313|± |0.0141|
-|hendrycksTest-nutrition | 1|acc |0.2222|± |0.0238|
-| | |acc_norm|0.2222|± |0.0238|
-|hendrycksTest-philosophy | 1|acc |0.2315|± |0.0240|
-| | |acc_norm|0.2315|± |0.0240|
-|hendrycksTest-prehistory | 1|acc |0.2963|± |0.0254|
-| | |acc_norm|0.2963|± |0.0254|
-|hendrycksTest-professional_accounting | 1|acc |0.2589|± |0.0261|
-| | |acc_norm|0.2589|± |0.0261|
-|hendrycksTest-professional_law | 1|acc |0.2490|± |0.0110|
-| | |acc_norm|0.2490|± |0.0110|
-|hendrycksTest-professional_medicine | 1|acc |0.4375|± |0.0301|
-| | |acc_norm|0.4375|± |0.0301|
-|hendrycksTest-professional_psychology | 1|acc |0.2271|± |0.0169|
-| | |acc_norm|0.2271|± |0.0169|
-|hendrycksTest-public_relations | 1|acc |0.2455|± |0.0412|
-| | |acc_norm|0.2455|± |0.0412|
-|hendrycksTest-security_studies | 1|acc |0.2367|± |0.0272|
-| | |acc_norm|0.2367|± |0.0272|
-|hendrycksTest-sociology | 1|acc |0.2438|± |0.0304|
-| | |acc_norm|0.2438|± |0.0304|
-|hendrycksTest-us_foreign_policy | 1|acc |0.2900|± |0.0456|
-| | |acc_norm|0.2900|± |0.0456|
-|hendrycksTest-virology | 1|acc |0.1928|± |0.0307|
-| | |acc_norm|0.1928|± |0.0307|
-|hendrycksTest-world_religions | 1|acc |0.1813|± |0.0295|
-| | |acc_norm|0.1813|± |0.0295|
-
-average mmlu is 0.2553175438596491 ??
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6079949388160e14e4e2e499/NzTdlxtBDp4drBRZgJiXt.png)
 
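The new description mentions ScaledSinusoidal position embeddings. The model's actual implementation ships via `custom_code` (loaded with `trust_remote_code=True`), so as a rough illustration only, a minimal PyTorch sketch of the general idea, fixed sinusoidal position encodings multiplied by a single learned scale (initialization to 1/sqrt(dim) is an assumption here), might look like:

```python
import math

import torch
import torch.nn as nn


class ScaledSinusoidal(nn.Module):
    """Fixed sinusoidal position encodings with one learned scalar scale.

    Illustrative sketch only; not the checkpoint's actual custom code.
    """

    def __init__(self, dim: int, max_len: int = 1024):
        super().__init__()
        # Standard sinusoidal table: sin on even channels, cos on odd ones.
        position = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, dim, 2, dtype=torch.float32) * (-math.log(10000.0) / dim)
        )
        pe = torch.zeros(max_len, dim)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe)  # not a trained parameter
        # Single learned scale; 1/sqrt(dim) init is an assumption.
        self.scale = nn.Parameter(torch.tensor(1.0 / math.sqrt(dim)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim) token embeddings
        return x + self.scale * self.pe[: x.size(1)]
```

Because the table is a buffer and only `scale` is a parameter, this adds exactly one trainable value, in line with the model's small non-embedding parameter budget.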