---
license: cc0-1.0
language:
- en
---
## Dataset Description

- **Repository:** https://github.com/huggingface/datablations
- **Paper:** [Scaling Data-Constrained Language Models](https://arxiv.org/abs/2305.16264)
- **Point of Contact:** [Niklas Muennighoff](mailto:n.muennighoff@gmail.com)

### Dataset Summary

Various subsets of [C4](https://huggingface.co/datasets/allenai/c4) with different numbers of tokens, measured with the `GPT2Tokenizer`.

This data is used in the paper [Scaling Data-Constrained Language Models](https://arxiv.org/abs/2305.16264).

Please refer to [our GitHub repository](https://github.com/huggingface/datablations) for more details.
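
Below is a minimal usage sketch showing how one of these subsets could be loaded and its token count spot-checked with the GPT-2 tokenizer. The repository id, configuration name, and `text` column are placeholders rather than confirmed identifiers; substitute the actual subset you want to use.

```python
from datasets import load_dataset
from transformers import GPT2TokenizerFast

# Placeholder repository id and config name; replace with the actual subset.
subset = load_dataset("datablations/c4-subsets", "c4-100m", split="train", streaming=True)

# Token counts for these subsets are measured with the GPT-2 tokenizer.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Count GPT-2 tokens over a small sample of documents (assumes a "text" column, as in C4).
total = 0
for example in subset.take(1000):
    total += len(tokenizer(example["text"]).input_ids)

print(f"GPT-2 tokens in the sampled documents: {total}")
```

Streaming mode avoids downloading an entire subset just to inspect a few documents; drop `streaming=True` to materialize the full split locally.
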
### Citation Information

```bibtex
@article{muennighoff2023scaling,
  title={Scaling Data-Constrained Language Models},
  author={Muennighoff, Niklas and Rush, Alexander M and Barak, Boaz and Scao, Teven Le and Piktus, Aleksandra and Tazi, Nouamane and Pyysalo, Sampo and Wolf, Thomas and Raffel, Colin},
  journal={arXiv preprint arXiv:2305.16264},
  year={2023}
}
```