Files changed (1)
1. README.md +119 -3
README.md CHANGED
@@ -1,13 +1,12 @@
 ---
-license: apache-2.0
 language:
 - en
 - de
 - fr
 - it
 - es
+license: apache-2.0
 library_name: transformers
-pipeline_tag: text-generation
 tags:
 - mistral
 - finetune
@@ -19,6 +18,110 @@ tags:
 - moe
 datasets:
 - argilla/distilabel-math-preference-dpo
+pipeline_tag: text-generation
+model-index:
+- name: SauerkrautLM-Mixtral-8x7B-Instruct
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 70.48
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 87.75
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 71.37
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 65.71
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 81.22
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 60.8
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct
+      name: Open LLM Leaderboard
 ---
 
 ![SauerkrautLM](https://vago-solutions.de/wp-content/uploads/2023/12/Sauerkraut_Instruct_MoE_Instruct.png "SauerkrautLM-Mixtral-8x7B")
@@ -103,4 +206,17 @@ If you are interested in customized LLMs for business applications, please get i
 We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
 
 ## Acknowledgement
-Many thanks to [argilla](https://huggingface.co/datasets/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the Open-Source community. And of course a big thanks to MistralAI for providing the open source community with their latest technology!
+Many thanks to [argilla](https://huggingface.co/datasets/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the Open-Source community. And of course a big thanks to MistralAI for providing the open source community with their latest technology!
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-Mixtral-8x7B-Instruct)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |72.89|
+|AI2 Reasoning Challenge (25-Shot)|70.48|
+|HellaSwag (10-Shot)              |87.75|
+|MMLU (5-Shot)                    |71.37|
+|TruthfulQA (0-shot)              |65.71|
+|Winogrande (5-shot)              |81.22|
+|GSM8k (5-shot)                   |60.80|
+
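As a sanity check on the results table this diff adds, the "Avg." row is simply the arithmetic mean of the six benchmark scores. A minimal sketch (the score values are copied from the table; the dictionary and variable names are illustrative, not part of the leaderboard tooling):

```python
# Benchmark scores from the Open LLM Leaderboard table added in this diff.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 70.48,
    "HellaSwag (10-Shot)": 87.75,
    "MMLU (5-Shot)": 71.37,
    "TruthfulQA (0-shot)": 65.71,
    "Winogrande (5-shot)": 81.22,
    "GSM8k (5-shot)": 60.80,
}

# The "Avg." column is the plain (unweighted) arithmetic mean, rounded to 2 places.
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 72.89 — matches the Avg. row in the table
```

This confirms the table is internally consistent: (70.48 + 87.75 + 71.37 + 65.71 + 81.22 + 60.80) / 6 ≈ 72.89.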