Wikitext-103 for CLM training

#4 opened by Qwinpin

Is this dataset suitable for training an LM from scratch, for research purposes, to study the behavior of small LMs (fewer than 100M parameters)?

I want to compare different architectures (with different training pipelines) on a CLM task and on subsequent fine-tuning. Is wikitext-103 alone enough for training?

I think so, based on Krishna et al. (2022). If you want a bigger corpus, you can of course switch to the Wikipedia dataset (https://huggingface.co/datasets/wikipedia).

Krishna, K., Garg, S., Bigham, J. P., & Lipton, Z. C. (2022). Downstream datasets make surprisingly good pretraining corpora. arXiv preprint arXiv:2209.14389.
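
In case it helps, here is a minimal sketch of how wikitext-103 could be prepared for CLM training with the Hugging Face `datasets` and `transformers` libraries. The GPT-2 tokenizer and the block size of 512 are illustrative assumptions, not part of the discussion above.

```python
# Sketch: load wikitext-103 and turn it into fixed-length blocks for
# causal language modeling. Tokenizer choice and block_size are assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

raw = load_dataset("wikitext", "wikitext-103-raw-v1")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tokenize(batch):
    return tokenizer(batch["text"])

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

block_size = 512  # assumed context length for a <100M-parameter model

def group_texts(examples):
    # Concatenate all token ids and split them into fixed-length blocks,
    # the standard preprocessing recipe for CLM training.
    concatenated = sum(examples["input_ids"], [])
    total_length = (len(concatenated) // block_size) * block_size
    input_ids = [
        concatenated[i : i + block_size]
        for i in range(0, total_length, block_size)
    ]
    # For CLM, labels are a copy of the inputs (the shift happens in the model).
    return {"input_ids": input_ids, "labels": [ids.copy() for ids in input_ids]}

lm_dataset = tokenized.map(
    group_texts,
    batched=True,
    remove_columns=tokenized["train"].column_names,
)
```

The resulting `lm_dataset["train"]` can then be passed to any CLM training loop (e.g. a `Trainer` with a small GPT-2-style config).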

Thanks for your answer, and thanks for the article as well; it looks extremely helpful.

Qwinpin changed discussion status to closed
