---
task_categories:
- text-generation
language:
- en
tags:
- math
size_categories:
- 10B<n<100B
configs:
- config_name: arxiv
data_files:
- split: train
path: arxiv/train/*.jsonl.zst
- split: validation
path: arxiv/validation/*.jsonl.zst
- split: test
path: arxiv/test/*.jsonl.zst
- config_name: open-web-math
data_files:
- split: train
path: open-web-math/train/*.jsonl.zst
- split: validation
path: open-web-math/validation/*.jsonl.zst
- split: test
path: open-web-math/test/*.jsonl.zst
- config_name: algebraic-stack
data_files:
- split: train
path: algebraic-stack/train/*.jsonl.zst
- split: validation
path: algebraic-stack/validation/*.jsonl.zst
- split: test
path: algebraic-stack/test/*.jsonl.zst
---
<img src="proofpile_logo.jpg" width="500">
[ArXiv](http://arxiv.org/abs/2310.10631) | [Models](https://huggingface.co/EleutherAI/llemma_34b) | [Data](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | [Code](https://github.com/EleutherAI/math-lm) | [Blog](https://blog.eleuther.ai/llemma/) | [Sample Explorer](https://llemma-demo.github.io/)
[Zhangir Azerbayev](https://zhangir-azerbayev.github.io/), [Hailey Schoelkopf](https://github.com/haileyschoelkopf), [Keiran Paster](https://keirp.com), [Marco Dos Santos](https://github.com/dsantosmarco), [Stephen McAleer](https://www.andrew.cmu.edu/user/smcaleer/), [Albert Q. Jiang](https://albertqjiang.github.io/), [Jia Deng](https://www.cs.princeton.edu/~jiadeng/), [Stella Biderman](https://www.stellabiderman.com/), [Sean Welleck](https://wellecks.com/)
The **Proof-Pile-2** is a 55-billion-token dataset of mathematical and scientific documents, created to train the [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) and [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b) models. It consists of three subsets:
- `arxiv` (29B tokens): the ArXiv subset of [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
- `open-web-math` (15B tokens): The [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) dataset, which contains much of the high-quality mathematical text from the internet.
- `algebraic-stack` (11B tokens): A new dataset of mathematical code, including numerical computing, computer algebra, and formal mathematics.
You can download the dataset as follows:
```python
from datasets import load_dataset
ds = load_dataset("EleuetherAI/proof-pile-2")
# To load only a specific subset, pass it as an argument, e.g
ds_arxiv = load_dataset("EleutherAI/proof-pile-2", "arxiv")
```
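Since the full dataset contains tens of billions of tokens, you may prefer to stream it rather than download it in full. A minimal sketch using the `datasets` streaming mode (subset and split names match the configs above):

```python
from datasets import load_dataset

# Stream the arxiv subset; rows are fetched lazily instead of
# downloading every shard to disk first.
ds_stream = load_dataset("EleutherAI/proof-pile-2", "arxiv",
                         split="train", streaming=True)

for row in ds_stream:
    print(row["text"][:200])  # first 200 characters of the first document
    break
```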
### Schema
Each dataset row has the following structure:
```python
{
"text": ..., # document text
"meta": ..., # JSON string of metadata, schema specific to data source
}
```
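Note that `meta` is stored as a JSON string rather than a parsed object, so decode it with `json.loads` before use. Its keys depend on the data source; the sketch below simply inspects them:

```python
import json
from datasets import load_dataset

ds = load_dataset("EleutherAI/proof-pile-2", "arxiv",
                  split="train", streaming=True)
row = next(iter(ds))

meta = json.loads(row["meta"])  # schema is specific to the data source
print(sorted(meta.keys()))      # inspect which fields this source provides
```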
### Dataset Contents
For detailed documentation of the ArXiv and web subsets, refer to [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) and [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math). The following table enumerates the contents of the AlgebraicStack by programming language. The AlgebraicStack is filtered to only include documents that contain mathematics, as judged by hand-crafted, language-specific heuristics.
| Language | AlgebraicStack tokens |
|-----------|-----------------------|
| Agda | 35.2 M |
| C | 25.1 M |
| C++ | 954.1 M |
| Coq | 281.9 M |
| Fortran | 724.9 M |
| GAP | 3.6 M |
| Haskell | 9.1 M |
| Idris | 10.9 M |
| Isabelle | 1,089.7 M |
| Julia | 531.0 M |
| Jupyter | 199.1 M |
| Lean | 285.6 M |
| Maple | 2.0 M |
| Matlab | 65.8 M |
| Python | 6,098.8 M |
| R | 71.3 M |
| TeX | 567.7 M |
| **Total** | **10,955.7 M** |
### License
We do not alter the license of any of the underlying data.
### Version History
**v1.1.0**: Contains an updated version of OpenWebMath, namely the version available at [open-web-math/open-web-math](https://huggingface.co/datasets/open-web-math/open-web-math). This version has slightly improved filtering; for example, very short documents are removed.
**v1.0.0**: The data used to train the [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) and [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b) models. Uses a development version of OpenWebMath.
### Citation
For the entire Proof-Pile-2, cite:
```
@misc{azerbayev2023llemma,
title={Llemma: An Open Language Model For Mathematics},
author={Zhangir Azerbayev and Hailey Schoelkopf and Keiran Paster and Marco Dos Santos and Stephen McAleer and Albert Q. Jiang and Jia Deng and Stella Biderman and Sean Welleck},
year={2023},
eprint={2310.10631},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
For the ArXiv subset, cite:
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
month = apr,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
For OpenWebMath, cite:
```
@misc{paster2023openwebmath,
title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text},
author={Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
year={2023},
eprint={2310.06786},
archivePrefix={arXiv},
primaryClass={cs.AI}
}
```