BerenMillidge committed
Commit 2d8f69f
1 Parent(s): 340d3a0

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -77,7 +77,7 @@ According to our evaluations, Zyda is the most performant per-token open dataset
  <img src="https://github.com/BerenMillidge/BerenMillidge.github.io/blob/master/assets/figures/Zyda/scores_across_time_smoothed.png" alt="Zyda scores across time ablations">
  </center>
 
- <figure style="width: 120%"> <img src="https://github.com/BerenMillidge/BerenMillidge.github.io/blob/master/assets/figures/Zyda/scores_across_time_smoothed.png"> <figcaption><em> Plot of unique tokens in the first 10k integers for the GPT2 tokenizer. </em></figcaption></figure>
+ <figure style="width: 120%"> <img src="https://github.com/BerenMillidge/BerenMillidge.github.io/blob/master/assets/figures/Zyda/scores_across_time_smoothed.png"> <figcaption><em>Zyda scores across time vs other datasets </em></figcaption></figure>
 
 
  ## How to download
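
The trailing context line above refers to the README's "How to download" section, whose body is not shown in this diff. As a minimal, hedged sketch of how a Hub-hosted dataset like this is typically fetched: the repository id `Zyphra/Zyda` and reliance on the default configuration are assumptions, not details confirmed by this commit.

```python
# Minimal sketch of streaming the dataset from the Hugging Face Hub.
# Assumption: the dataset is published under the repo id "Zyphra/Zyda";
# adjust the id (and any config name) to match the actual dataset card.
from datasets import load_dataset

# streaming=True avoids downloading the full corpus up front,
# which is preferable for a dataset of this scale.
ds = load_dataset("Zyphra/Zyda", split="train", streaming=True)

# Peek at the first record to see which fields are available.
for example in ds:
    print(example.keys())
    break
```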