Tasks: Text Generation
Modalities: Text
Sub-tasks: language-modeling
Languages: English
Size: 100K - 1M
License:

zhangirazerbayev committed 664e7c5 (parent: 72455b2): Update README.md
README.md CHANGED

@@ -22,7 +22,7 @@ task_ids:
 ---
 
 # Dataset Description
-The `proof-pile` is a
+The `proof-pile` is a 13GB pre-training dataset of mathematical text that comprises 8.3 billion tokens (using the `gpt-neox` tokenizer). Models trained on this dataset are coming soon :) The dataset is composed of diverse sources of both informal and formal mathematics, namely
 - ArXiv.math (35GB)
 - Open-source math textbooks (50MB)
 - Formal mathematics libraries (500MB)