---
task_categories:
- text-generation
language:
- en
tags:
- math
size_categories:
- 10B<n<100B
---
<img src="proofpile_logo.jpg" width="500">

[Zhangir Azerbayev](https://zhangir-azerbayev.github.io/), [Hailey Schoelkopf](https://github.com/haileyschoelkopf), [Keiran Paster](https://keirp.com), [Marco Dos Santos](https://github.com/dsantosmarco), [Stephen McAleer](https://www.andrew.cmu.edu/user/smcaleer/), [Albert Q. Jiang](https://albertqjiang.github.io/), [Jia Deng](https://www.cs.princeton.edu/~jiadeng/), [Stella Biderman](https://www.stellabiderman.com/), [Sean Welleck](https://wellecks.com/)

[GitHub](https://github.com/EleutherAI/math-lm) | [ArXiv](#)

The **Proof-Pile-2** is a 55-billion-token dataset of mathematical and scientific documents. It was created to train the [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) and [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b) models. It consists of three subsets:
- `arxiv` (29B tokens): the ArXiv subset of [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
- `open-web-math` (15B tokens): the [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) dataset, which contains much of the high-quality mathematical text available on the internet.
- `algebraic-stack` (11B tokens): a new dataset of mathematical code, including numerical computing, computer algebra, and formal mathematics.

You can download the dataset as follows:
```python
from datasets import load_dataset

ds = load_dataset("EleutherAI/proof-pile-2")

# To load only a specific subset, pass it as an argument, e.g.
ds_arxiv = load_dataset("EleutherAI/proof-pile-2", "arxiv")
```
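
The full corpus is large (roughly 55B tokens), so if you only want to inspect it or iterate over it lazily, streaming may be more convenient than a full download. The sketch below uses the standard `streaming=True` option of `datasets.load_dataset`; this is a generic `datasets` feature rather than anything specific to this repository, and the `train` split name is an assumption.

```python
from datasets import load_dataset

# Stream the arxiv subset instead of downloading it in full
ds_stream = load_dataset("EleutherAI/proof-pile-2", "arxiv", split="train", streaming=True)

# Peek at the first few documents
for i, row in enumerate(ds_stream):
    print(row["text"][:200])
    if i >= 2:
        break
```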

### Schema
Each dataset row has the following structure:
```python
{
  "text": ..., # document text
  "meta": ..., # JSON string of metadata, schema specific to data source
}
```
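
Because `meta` is a JSON string rather than a nested object, you will typically decode it with `json.loads` before use. A minimal sketch (the available keys differ by data source, so this just prints whichever keys the first few documents carry; the `train` split name and streaming mode are assumptions, as above):

```python
import json
from datasets import load_dataset

ds = load_dataset("EleutherAI/proof-pile-2", "open-web-math", split="train", streaming=True)

for i, row in enumerate(ds):
    meta = json.loads(row["meta"])  # decode the per-document metadata
    print(sorted(meta.keys()))      # keys are specific to the data source
    if i >= 4:
        break
```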

### Dataset Contents
For detailed documentation of the ArXiv and web subsets, refer to [RedPajama](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) and [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math). The following table enumerates the contents of the AlgebraicStack by programming language. The AlgebraicStack is filtered to only include documents that contain mathematics, as judged by hand-crafted, language-specific heuristics.

| Language  | AlgebraicStack tokens |
|-----------|-----------------------|
| Agda      | 35.2 M                |
| C         | 25.1 M                |
| C++       | 954.1 M               |
| Coq       | 281.9 M               |
| Fortran   | 724.9 M               |
| GAP       | 3.6 M                 |
| Haskell   | 9.1 M                 |
| Idris     | 10.9 M                |
| Isabelle  | 1,089.7 M             |
| Julia     | 531.0 M               |
| Jupyter   | 199.1 M               |
| Lean      | 285.6 M               |
| Maple     | 2.0 M                 |
| Matlab    | 65.8 M                |
| Python    | 6,098.8 M             |
| R         | 71.3 M                |
| TeX       | 567.7 M               |
| **Total** | **10,955.7 M**        |

### License
We do not alter the license of any of the underlying data.

### Version History
**v1.0.0**: The data used to train the [Llemma 7B](https://huggingface.co/EleutherAI/llemma_7b) and [Llemma 34B](https://huggingface.co/EleutherAI/llemma_34b) models. Uses a development version of OpenWebMath.

### Citation 
For the entire Proof-Pile-2, cite:
```
@article{azerbayev2023llemma,
    title={Llemma: an open language model for mathematics},
    author={Zhangir Azerbayev and Hailey Schoelkopf and Keiran Paster and Marco Dos Santos and Stephen McAleer and Albert Q. Jiang and Jia Deng and Stella Biderman and Sean Welleck},
    eprint={xyz.xyz},
    archivePrefix={arXiv},
    year={2023}
}
```
For the ArXiv subset, cite:
```
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = April,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
For OpenWebMath, cite:
```
@misc{paster2023openwebmath,
      title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text}, 
      author={Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
      year={2023},
      eprint={2310.06786},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```