kwchoi leaderboard-pr-bot committed
Commit 9526d7d · verified · 1 Parent(s): 21b07f8

Adding Evaluation Results (#1)


- Adding Evaluation Results (71307305aebec979ea73a8ca975264b643d71bec)


Co-authored-by: Open LLM Leaderboard PR Bot <leaderboard-pr-bot@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +119 -3
README.md CHANGED
@@ -1,8 +1,124 @@
  ---
  license: apache-2.0
  datasets:
  - argilla/ultrafeedback-binarized-preferences-cleaned
- language:
- - en
  ---
- Testing the Mistral-Instruct model with the Orca DPO dataset to study the effects of DPO. Used Mistral-7B-Instruct-v0.2 as the base model due to its strong performance.
  ---
+ language:
+ - en
  license: apache-2.0
  datasets:
  - argilla/ultrafeedback-binarized-preferences-cleaned
+ model-index:
+ - name: DPO_mistral_v01_7b_ultra_0130_1k
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 57.17
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 79.16
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 55.85
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 55.62
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 72.85
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 26.31
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k
+       name: Open LLM Leaderboard
  ---
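
The model-index block above is the machine-readable form of the leaderboard scores. As a minimal sketch of reading those numbers back programmatically (assuming huggingface_hub's `ModelCard` / `EvalResult` API; attribute names may vary slightly across library versions):

```python
# Minimal sketch: read the model-index metadata back from the Hub.
# Assumes huggingface_hub's ModelCard / EvalResult API.
from huggingface_hub import ModelCard

card = ModelCard.load("kwchoi/DPO_mistral_v01_7b_ultra_0130_1k")
for result in card.data.eval_results or []:
    # e.g. "AI2 Reasoning Challenge (25-Shot) acc_norm 57.17"
    print(result.dataset_name, result.metric_type, result.metric_value)
```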
+ Testing the Mistral-Instruct model with the Orca DPO dataset to study the effects of DPO. Used Mistral-7B-Instruct-v0.2 as the base model due to its strong performance.
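
In rough outline, the DPO setup described above could be run with the `trl` library. This is a minimal sketch under assumptions, not the author's actual script: it assumes a recent `trl` release with `DPOConfig`, uses the `argilla/ultrafeedback-binarized-preferences-cleaned` dataset listed in the metadata (the prose mentions an Orca DPO dataset), and all hyperparameters are illustrative.

```python
# Rough sketch of a DPO run over Mistral-7B-Instruct-v0.2, as described in the card.
# Assumes a recent trl (DPOConfig API); hyperparameters are illustrative.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mistralai/Mistral-7B-Instruct-v0.2"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference pairs (prompt / chosen / rejected); some column renaming or chat
# formatting may be needed depending on the trl version.
train_dataset = load_dataset(
    "argilla/ultrafeedback-binarized-preferences-cleaned", split="train"
)

args = DPOConfig(
    output_dir="DPO_mistral_v01_7b_ultra_0130_1k",
    beta=0.1,                        # illustrative DPO temperature
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=5e-7,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,             # newer trl versions use processing_class=
)
trainer.train()
```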
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_kwchoi__DPO_mistral_v01_7b_ultra_0130_1k)
+
+ | Metric                            | Value |
+ |-----------------------------------|------:|
+ | Avg.                              | 57.83 |
+ | AI2 Reasoning Challenge (25-Shot) | 57.17 |
+ | HellaSwag (10-Shot)               | 79.16 |
+ | MMLU (5-Shot)                     | 55.85 |
+ | TruthfulQA (0-shot)               | 55.62 |
+ | Winogrande (5-shot)               | 72.85 |
+ | GSM8k (5-shot)                    | 26.31 |
+
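
The Avg. row is the arithmetic mean of the six task scores (346.96 / 6 ≈ 57.83). A single row can be re-checked locally along these lines with EleutherAI's lm-evaluation-harness (a sketch assuming the v0.4-style Python API; the leaderboard pins its own harness version and settings, so exact numbers may differ):

```python
# Sketch: re-run the ARC-Challenge (25-shot) evaluation locally.
# Assumes lm-evaluation-harness v0.4-style API; the Open LLM Leaderboard uses
# a pinned harness version, so results may differ slightly from the table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=kwchoi/DPO_mistral_v01_7b_ultra_0130_1k,dtype=bfloat16",
    tasks=["arc_challenge"],   # the card's ARC-Challenge / test-split config
    num_fewshot=25,
)
# Metric key naming follows the v0.4 convention ("acc_norm,none").
print(results["results"]["arc_challenge"]["acc_norm,none"])  # card reports 57.17
```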