leaderboard-pr-bot committed on
Commit
cb743e8
1 Parent(s): afab8e5

Adding Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
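For reference, once a PR like this is merged, the `model-index` metadata it adds can be read back programmatically. Below is a minimal sketch (not part of the automated PR) using the `huggingface_hub` library; it assumes a recent release that exposes `ModelCard` and `eval_results`:

```python
# Sketch only (not part of this automated PR): read the evaluation metadata
# back from the model card after the PR is merged.
# Assumes a recent huggingface_hub release with ModelCard / eval_results support.
from huggingface_hub import ModelCard

card = ModelCard.load("OpenBuddy/openbuddy-deepseekcoder-33b-v16.1-32k")

# eval_results is populated from the model-index block added in this PR.
for result in card.data.eval_results or []:
    print(result.dataset_name, result.metric_type, result.metric_value)
```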

Files changed (1)
  1. README.md +119 -3
README.md CHANGED
@@ -9,13 +9,115 @@ language:
 - it
 - ru
 - fi
-
+license: other
+library_name: transformers
 pipeline_tag: text-generation
 inference: false
-library_name: transformers
-license: other
 license_name: deepseek
 license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/d3bb741e2525dbbcc1c2f732f64682131d644d0f/LICENSE-MODEL
+model-index:
+- name: openbuddy-deepseekcoder-33b-v16.1-32k
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 45.05
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OpenBuddy/openbuddy-deepseekcoder-33b-v16.1-32k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 60.79
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OpenBuddy/openbuddy-deepseekcoder-33b-v16.1-32k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 43.24
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OpenBuddy/openbuddy-deepseekcoder-33b-v16.1-32k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 44.49
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OpenBuddy/openbuddy-deepseekcoder-33b-v16.1-32k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 62.19
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OpenBuddy/openbuddy-deepseekcoder-33b-v16.1-32k
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 43.67
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=OpenBuddy/openbuddy-deepseekcoder-33b-v16.1-32k
+      name: Open LLM Leaderboard
 ---
 
 # OpenBuddy - Open Multilingual Chatbot
@@ -50,3 +152,17 @@ By using OpenBuddy, you agree to these terms and conditions, and acknowledge tha
 OpenBuddy is provided "as is" without warranty of any kind, express or implied, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use of or other dealings in the software.
 
 By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseekcoder-33b-v16.1-32k)
+
+| Metric                           | Value |
+|----------------------------------|------:|
+| Avg.                             | 49.91 |
+| AI2 Reasoning Challenge (25-Shot)| 45.05 |
+| HellaSwag (10-Shot)              | 60.79 |
+| MMLU (5-Shot)                    | 43.24 |
+| TruthfulQA (0-shot)              | 44.49 |
+| Winogrande (5-shot)              | 62.19 |
+| GSM8k (5-shot)                   | 43.67 |
+
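As a closing note (again, not part of the PR itself), the detailed-results dataset linked in the added README section can be browsed programmatically. A minimal sketch with `huggingface_hub` that only lists the dataset's files, since its internal config names are not spelled out here:

```python
# Sketch only: list the files of the per-task details dataset referenced in the
# added "Open LLM Leaderboard Evaluation Results" section of the README.
from huggingface_hub import list_repo_files

repo_id = "open-llm-leaderboard/details_OpenBuddy__openbuddy-deepseekcoder-33b-v16.1-32k"
for path in list_repo_files(repo_id, repo_type="dataset"):
    print(path)
```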