---
license: cc0-1.0
language:
- en
---
## Dataset Description

- **Repository:** https://github.com/huggingface/datablations
- **Paper:** [Scaling Data-Constrained Language Models](https://arxiv.org/abs/2305.16264)
- **Point of Contact:** [Niklas Muennighoff](mailto:n.muennighoff@gmail.com)

### Dataset Summary


Various subsets of [C4](https://huggingface.co/datasets/allenai/c4), each containing a different number of tokens as measured with the GPT-2 tokenizer (`GPT2Tokenizer`).

This data is used in the paper [Scaling Data-Constrained Language Models](https://arxiv.org/abs/2305.16264).

Please refer to [our GitHub repository](https://github.com/huggingface/datablations) for more details.
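
The subsets are standard Hugging Face datasets and can be loaded with `load_dataset`. The sketch below is a minimal example only: the repository ID, the split, and the `text` column are assumptions (C4 stores its documents in a `text` field), so check the GitHub repository above for the exact subset names.

```python
from datasets import load_dataset
from transformers import GPT2TokenizerFast

# NOTE: "datablations/c4-subsets" is a hypothetical repository ID used for
# illustration; replace it with the actual subset you want to load.
ds = load_dataset("datablations/c4-subsets", split="train")

# Re-count tokens with the GPT-2 tokenizer, matching how subset sizes were measured.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
total_tokens = sum(len(tokenizer(ex["text"])["input_ids"]) for ex in ds)
print(f"Total GPT-2 tokens: {total_tokens:,}")
```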


```bibtex
@article{muennighoff2023scaling,
  title={Scaling Data-Constrained Language Models},
  author={Muennighoff, Niklas and Rush, Alexander M and Barak, Boaz and Scao, Teven Le and Piktus, Aleksandra and Tazi, Nouamane and Pyysalo, Sampo and Wolf, Thomas and Raffel, Colin},
  journal={arXiv preprint arXiv:2305.16264},
  year={2023}
}
```