Commit 073980f
Parent: 67c0ac2

Adding Evaluation Results (#3)


- Adding Evaluation Results (5a731ab3d9d6af638aa56692bb8367f0fb95685f)


Co-authored-by: Open LLM Leaderboard PR Bot <leaderboard-pr-bot@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +117 -1
README.md CHANGED
@@ -2,6 +2,109 @@
 license: mit
 datasets:
 - nbertagnolli/counsel-chat
+model-index:
+- name: MelloGPT
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 53.84
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=steve-cse/MelloGPT
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 76.12
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=steve-cse/MelloGPT
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 55.99
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=steve-cse/MelloGPT
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 55.61
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=steve-cse/MelloGPT
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 73.88
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=steve-cse/MelloGPT
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 30.1
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=steve-cse/MelloGPT
+      name: Open LLM Leaderboard
 ---
 # MelloGPT
 <p align="center">
@@ -40,4 +143,17 @@ This project was inspired by the project(s) listed below:
 ## Credits
 This is my first attempt at fine-tuning a large language model. It wouldn't be possible without [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) and [Runpod](runpod.io). The axolotl config file can be found [here](https://github.com/steve-cse/mello/blob/master/mello.yml).
 
-[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_steve-cse__MelloGPT)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |57.59|
+|AI2 Reasoning Challenge (25-Shot)|53.84|
+|HellaSwag (10-Shot)              |76.12|
+|MMLU (5-Shot)                    |55.99|
+|TruthfulQA (0-shot)              |55.61|
+|Winogrande (5-shot)              |73.88|
+|GSM8k (5-shot)                   |30.10|
+
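As a sanity check on what this commit adds: the YAML `model-index` block and the markdown table carry the same six scores, and the table's "Avg." row is simply their unweighted mean. The sketch below is a minimal example of reading this metadata programmatically rather than eyeballing the diff; it assumes the `huggingface_hub` library and network access, and `ModelCard.load` / `eval_results` are that library's general mechanism for parsed `model-index` data, not anything specific to this repo.

```python
from huggingface_hub import ModelCard

# Fetch the card for the repo this commit updates and read the
# model-index entries added above as structured EvalResult objects.
card = ModelCard.load("steve-cse/MelloGPT")
for result in card.data.eval_results:
    print(f"{result.dataset_name}: {result.metric_type} = {result.metric_value}")

# Recompute the leaderboard's "Avg." row: the unweighted mean of the
# six benchmark scores in the table above.
scores = [53.84, 76.12, 55.99, 55.61, 73.88, 30.10]
print(round(sum(scores) / len(scores), 2))  # 345.54 / 6 -> 57.59
```

Per-example outputs behind these aggregates live in the `open-llm-leaderboard/details_steve-cse__MelloGPT` dataset linked above.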