cstr committed on
Commit
06e68b1
1 Parent(s): c29116a

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -21,9 +21,9 @@ No change in llama3 prompt format template.
 It should work not too bad for German or English, e.g. it achieves 64.14(.10) on EQ Bench v2_de with 170(171)/171 per [q4km GGUF](https://huggingface.co/cstr/llama3-8b-spaetzle-v13-GGUF) (bpe fixed).
 And for English EQ-Bench Score (v2): 75.59, Parseable: 171.
 
-| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
-|------------------------------|------------|--------|-----------|--------|------------|------------|--------|
-| cstr/llama3-8b-spaetzle-v13 | 71.26 | 68.69 | 85.05 | 68.06 | 59.43 | 79.24 | 67.1 |
+| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
+|------------|--------|-----------|--------|------------|------------|--------|
+| 71.26 | 68.69 | 85.05 | 68.06 | 59.43 | 79.24 | 67.1 |
 
 ## Sample output
 
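For context, here is a minimal sketch (not part of the commit) of how the q4km GGUF referenced in the README might be run locally with llama-cpp-python. The local filename is a hypothetical placeholder, and the chat format follows the unchanged llama3 prompt template noted in the hunk header.

```python
from llama_cpp import Llama

# Load the quantized model. The filename below is a hypothetical placeholder;
# download the actual q4km GGUF from cstr/llama3-8b-spaetzle-v13-GGUF first.
llm = Llama(
    model_path="llama3-8b-spaetzle-v13.Q4_K_M.gguf",
    n_ctx=8192,
    chat_format="llama-3",  # the llama3 prompt format template is unchanged
)

# Simple chat completion; the model targets both German and English.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Wie geht es dir?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```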