BerenMillidge committed bec4434 (1 parent: 324ba38)

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -66,7 +66,7 @@ According to our evaluations, Zyda is the most performant per-token open dataset
 
 
 <center>
-<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/VdrCqypZtTpjEs7bH1k9s.png" width="800" alt="Zyda performance across steps.">
+<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/VdrCqypZtTpjEs7bH1k9s.png" width="650" alt="Zyda performance across steps.">
 </center>
 
 These results are aggregate scores of classic language modeling evaluations (PIQA, WinoGrande, OpenBookQA, ARC-Easy, ARC-Challenge) across time for a 1.4B model trained on 50B tokens of each dataset.
@@ -131,7 +131,7 @@ StarCoder: https://huggingface.co/datasets/bigcode/starcoderdata
 
 
 <center>
-<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/eCJWG3ZoA4fVk8bZZBHaG.png" width="800" alt="Composition of Zyda">
+<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/eCJWG3ZoA4fVk8bZZBHaG.png" width="650" alt="Composition of Zyda">
 </center>
 
 
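For context on the README text in the hunks above: the "aggregate scores" over the five listed benchmarks are most plausibly an unweighted mean of per-task accuracies. That weighting is an assumption (the excerpt does not specify it), and the values in the sketch below are hypothetical placeholders, not results from this commit:

```python
# Minimal sketch, assuming "aggregate score" means an unweighted mean of
# per-task accuracies; the README excerpt does not specify the weighting.
# The accuracy values are hypothetical placeholders for illustration only.
per_task_accuracy = {
    "piqa": 0.74,
    "winogrande": 0.62,
    "openbookqa": 0.35,
    "arc_easy": 0.61,
    "arc_challenge": 0.31,
}

# Plain average across tasks, evaluated at a single training step.
aggregate = sum(per_task_accuracy.values()) / len(per_task_accuracy)
print(f"Aggregate score over {len(per_task_accuracy)} tasks: {aggregate:.3f}")
```

Tracking this average at successive checkpoints over the 50B-token run would produce the per-step curve shown in the first image.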