Would you share the 0.28T-token dataset that achieved the highest scores in the 7B-2x experiment?

#6
by Mars2050 - opened

Fantastic work!

I am highly impressed by this project. It is remarkable that a 7B model achieved a score of 50.8 on MMLU after being trained on just a 0.28T token dataset. Would it be possible for you to share this crucial part of the dataset?

Thank you very much for your great work and contributions.

ML Foundations org

Hi @Mars2050 ,
Thank you for your compliments.
The 0.28T tokens are randomly sampled from the DCLM-Baseline pool, which is available here.

gsmyrnis changed discussion status to closed

Hi! Thanks for your great work.

We don't have enough storage for all the shards, and we noticed that simply picking one shard is not representative of the randomly sampled documents.

So would it be possible to release the sampled, smaller version of this dataset, as FineWeb does?
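As a stopgap until (or unless) the sampled subset is released, one can draw a uniform sample from the full sharded pool in a streaming fashion, without ever holding all shards on disk. The sketch below is not the DCLM tooling; it is a generic reservoir-sampling example (Algorithm R) over a simulated document stream, and the shard/document names are made up for illustration. It also shows why a single shard is not representative: a uniform sample touches (almost surely) every shard.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Uniformly sample k items from a stream of unknown length,
    keeping only k items in memory (Vitter's Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Keep item with probability k/(i+1), evicting a random slot.
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# Hypothetical sharded pool: 10 shards of 1000 documents each.
shards = {s: [f"shard{s}-doc{d}" for d in range(1000)] for s in range(10)}
stream = (doc for shard in shards.values() for doc in shard)

sample = reservoir_sample(stream, k=200, seed=42)

# A uniform sample of 200 docs spans all 10 shards with overwhelming
# probability, unlike taking any single shard.
touched = {doc.split("-")[0] for doc in sample}
print(len(sample), sorted(touched))
```

The same idea works over a streamed iterator of real shards (e.g. reading compressed shards one at a time), since the reservoir only ever holds `k` documents in memory.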
