Update README.md
README.md CHANGED
@@ -18,7 +18,13 @@ llama3-8b-spaetzle-v20 is a merge of the following models:
* [cstr/llama3-8b-spaetzle-v13](https://huggingface.co/cstr/llama3-8b-spaetzle-v13)
* [nbeerbower/llama-3-wissenschaft-8B-v2](https://huggingface.co/nbeerbower/llama-3-wissenschaft-8B-v2)

-
+# Benchmarks
+
+On EQ-Bench v2_de it achieves 65.7 (171/171 parseable). From the Open LLM Leaderboard ([details](https://huggingface.co/datasets/open-llm-leaderboard/details_cstr__llama3-8b-spaetzle-v20/blob/main/results_2024-05-25T12-52-23.640126.json)):
+
+| Model                       | Average | ARC   | HellaSwag | MMLU  | TruthfulQA | Winogrande | GSM8K |
+|-----------------------------|---------|-------|-----------|-------|------------|------------|-------|
+| cstr/llama3-8b-spaetzle-v20 | 71.83   | 70.39 | 85.69     | 68.52 | 60.98      | 78.37      | 67.02 |
+

## 🧩 Configuration

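For orientation beyond the diff itself: a minimal sketch of how the merged model could be loaded and prompted with the `transformers` library. The repository id comes from the README; the dtype, device placement, prompt, and generation settings are assumptions, not something this commit specifies.

```python
# Minimal sketch, not from the README: load the merged model and run one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cstr/llama3-8b-spaetzle-v20"  # repo id taken from the model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: hardware with bf16 support
    device_map="auto",
)

# Build a Llama-3-style prompt via the tokenizer's chat template
messages = [{"role": "user", "content": "What is a model merge, in one paragraph?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding; sampling settings are a choice, not prescribed by the card
output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```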