Request to provide 1B/10B/100B/1T token subsample datasets separately

#4 by johnhew - opened

Hi RedPajama team,

Thanks for making this available. I have a request:

As I iterate on language model development, my favorite resource you've released is actually the 1.3B-token sample of v1 that you provided here.

I don't have the disk space to download the whole dataset, and I'm worried that any partial download I do won't give me iid samples from the underlying distribution. The 1B subsample of your V1 dataset let me get a sense of how model development would go without committing to downloading the whole dataset. If you made iid-sampled datasets of 1B, 10B, 100B, and 1T tokens of V2 available, I could scale up my 1B experiments gracefully without needing all the disk space up front! I think others would appreciate this as well.

Thanks for considering.

Hi @johnhew , I think this is a great suggestion! Generally, you can do a partial download by sampling, e.g., one file per snapshot, which should give you a sample that reflects the distribution of the entire dataset (it's just important to sample from all snapshots).
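For concreteness, here is a minimal sketch of what that per-snapshot sampling could look like with `huggingface_hub`. The repo id is the real dataset repo, but the `documents/<snapshot>/...` path layout is an assumption on my part; check the output of `list_repo_files` against the actual listing before relying on it:

```python
# Minimal sketch: download one randomly chosen file per snapshot so that
# every snapshot contributes to the sample. Assumes `pip install huggingface_hub`.
# NOTE: the "documents/<snapshot>/..." path pattern below is an assumption;
# verify it against the real file listing first.
import random
from huggingface_hub import hf_hub_download, list_repo_files

REPO_ID = "togethercomputer/RedPajama-Data-V2"

files = list_repo_files(REPO_ID, repo_type="dataset")

# Group files by snapshot, assumed to be the second path component,
# e.g. "documents/2023-14/0000/en_head.json.gz".
by_snapshot = {}
for path in files:
    parts = path.split("/")
    if len(parts) >= 2 and parts[0] == "documents":
        by_snapshot.setdefault(parts[1], []).append(path)

# Fetch one file per snapshot; fix the seed for a reproducible sample.
random.seed(0)
local_paths = [
    hf_hub_download(REPO_ID, random.choice(shards), repo_type="dataset")
    for shards in by_snapshot.values()
]
```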

Also, the sample we provide contains ~1B raw tokens (i.e., not deduplicated), together with quality annotations. You can use it to get acquainted with the format and start building your pipeline. However, it comes from only one snapshot, so there is some bias.

In any case, I will add this request to my list. Thank you for suggesting this! :)

Please do! 1B, 10B, and 100B would be awesome!
