Update README.md

README.md CHANGED
```diff
@@ -1,12 +1,14 @@
 ---
 datasets:
 - Open-Orca/SlimOrca-Dedup
-- garage-bAInd/Open-Platypus
 - teknium/openhermes
 - meta-math/MetaMathQA
-- HuggingFaceH4/ultrachat_200k
 - migtissera/Synthia-v1.3
 - THUDM/AgentInstruct
+- LeoLM/German_Songs
+- LeoLM/German_Poems
+- LeoLM/OpenSchnabeltier
+- bjoernp/ultrachat_de
 language:
 - en
 library_name: transformers
@@ -41,7 +43,8 @@ The model was trained with compute provided by [HessianAI](https://hessian.ai/)
 4. [Dataset](#dataset)
 5. [Acknowledgements](#acknowledgements)
 6. [Contact](#contact)
-7. [
+7. [About DiscoResearch](#about-discoresearch)
+8. [Disclaimer](#disclaimer)
 
 ## Download
 
@@ -51,6 +54,8 @@ The model was trained with compute provided by [HessianAI](https://hessian.ai/)
 
 ## Benchmarks
 
+### Hugging Face Leaderboard
+
 This model is still an early alpha and we can't guarantee that there isn't any contamination.
 However, the average of **72.15** would earn the #2 spot on the HF leaderboard at the time of writing and the highest score for a >70b model yet.
 
@@ -66,7 +71,35 @@ However, the average of **72.15** would earn the #2 spot on the HF leaderboard a
 
 We use the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard.
 
+### FastEval
+
+| Metric   | Value    |
+|----------|----------|
+| GSM8K    | 81.2     |
+| Math     | 22.3     |
+| BBH      | 72.9     |
+| MMLU     | 67.9     |
+| **Avg.** | **53.3** |
+
+### MTBench
+
+```json
+{
+  "first_turn": 8.45,
+  "second_turn": 7.45,
+  "categories": {
+    "writing": 9.4,
+    "roleplay": 8.65,
+    "reasoning": 6.85,
+    "math": 5.55,
+    "coding": 4.95,
+    "extraction": 9.15,
+    "stem": 9.225,
+    "humanities": 9.825
+  },
+  "average": 7.95
+}
+```
 
 ## Prompt Format
 
@@ -113,7 +146,11 @@ Many thanks for all dataset providers/curators!
 
 Best way to reach us is on our [Discord](https://discord.gg/4pAqJP7W).
 
-##
+## About DiscoResearch
+
+DiscoResearch is an aspiring open research community. Disco should be a place where researchers from many communities can come together to combine their expertise and create innovative and groundbreaking LLMs. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!
+
+## Acknowledgements
 
 Disco 120b is a [DiscoResearch](https://huggingface.co/DiscoResearch) project and was trained by [Björn Plüster](https://huggingface.co/bjoernp). [Jan Harries](https://huggingface.co/jphme) helped with technical advice, logistics and the Model Card, and [AutoMeta](https://huggingface.co/Alignment-Lab-AI) also provided helpful technical advice.
 The model was trained with compute provided by [HessianAI](https://hessian.ai/) - many thanks in particular to [Patrick Schramowski](https://huggingface.co/PSaiml) for his support.
@@ -122,7 +159,7 @@ We are standing on the shoulders of giants; many thanks in no particular order t
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
 
-## Disclaimer
+## Disclaimer
 
 The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
-This model should only be used for research purposes. The original Llama2 license
+This model should only be used for research purposes. The original Llama2 license and all restrictions of datasets used to train this model apply.
```
---
datasets:
- Open-Orca/SlimOrca-Dedup
- teknium/openhermes
- meta-math/MetaMathQA
- migtissera/Synthia-v1.3
- THUDM/AgentInstruct
- LeoLM/German_Songs
- LeoLM/German_Poems
- LeoLM/OpenSchnabeltier
- bjoernp/ultrachat_de
language:
- en
library_name: transformers

4. [Dataset](#dataset)
5. [Acknowledgements](#acknowledgements)
6. [Contact](#contact)
7. [About DiscoResearch](#about-discoresearch)
8. [Disclaimer](#disclaimer)

## Download
## Benchmarks

### Hugging Face Leaderboard

This model is still an early alpha and we can't guarantee that there isn't any contamination.
However, the average of **72.15** would earn the #2 spot on the HF leaderboard at the time of writing and the highest score for a >70b model yet.
We use the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard.
### FastEval

| Metric   | Value    |
|----------|----------|
| GSM8K    | 81.2     |
| Math     | 22.3     |
| BBH      | 72.9     |
| MMLU     | 67.9     |
| **Avg.** | **53.3** |
### MTBench

```json
{
  "first_turn": 8.45,
  "second_turn": 7.45,
  "categories": {
    "writing": 9.4,
    "roleplay": 8.65,
    "reasoning": 6.85,
    "math": 5.55,
    "coding": 4.95,
    "extraction": 9.15,
    "stem": 9.225,
    "humanities": 9.825
  },
  "average": 7.95
}
```
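As a sanity check on the MTBench numbers above: the reported overall average is both the mean of the two turn scores and the mean of the eight category scores. A minimal sketch, using only the scores shown in the JSON block above:

```python
import json
from statistics import mean

# MTBench scores exactly as reported in the model card above.
scores = json.loads("""
{
  "first_turn": 8.45,
  "second_turn": 7.45,
  "categories": {
    "writing": 9.4,
    "roleplay": 8.65,
    "reasoning": 6.85,
    "math": 5.55,
    "coding": 4.95,
    "extraction": 9.15,
    "stem": 9.225,
    "humanities": 9.825
  },
  "average": 7.95
}
""")

# Mean of the first- and second-turn scores.
turn_avg = mean([scores["first_turn"], scores["second_turn"]])
# Mean over the eight per-category scores.
category_avg = mean(scores["categories"].values())

print(round(turn_avg, 2), round(category_avg, 2))  # → 7.95 7.95
```

Both computations agree with the reported `"average"` of 7.95, so the summary number is internally consistent with the per-turn and per-category breakdowns.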
## Prompt Format
Best way to reach us is on our [Discord](https://discord.gg/4pAqJP7W).

## About DiscoResearch

DiscoResearch is an aspiring open research community. Disco should be a place where researchers from many communities can come together to combine their expertise and create innovative and groundbreaking LLMs. Come join our Discord, share your opinions and ideas, and advance open LLM research with us!

## Acknowledgements
Disco 120b is a [DiscoResearch](https://huggingface.co/DiscoResearch) project and was trained by [Björn Plüster](https://huggingface.co/bjoernp). [Jan Harries](https://huggingface.co/jphme) helped with technical advice, logistics and the Model Card, and [AutoMeta](https://huggingface.co/Alignment-Lab-AI) also provided helpful technical advice.
The model was trained with compute provided by [HessianAI](https://hessian.ai/) - many thanks in particular to [Patrick Schramowski](https://huggingface.co/PSaiml) for his support.

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
This model should only be used for research purposes. The original Llama2 license and all restrictions of datasets used to train this model apply.