Tasks: Text Generation
Modalities: Text
Sub-tasks: language-modeling
Languages: English
Size: 100K - 1M
License:
zhangirazerbayev committed b18f429
Parent(s): 664e7c5
Update README.md

README.md CHANGED
@@ -23,7 +23,7 @@ task_ids:
 
 # Dataset Description
 The `proof-pile` is a 13GB pre-training dataset of mathematical text that comprises 8.3 billion tokens (using the `gpt-neox` tokenizer). Models trained on this dataset are coming soon :) The dataset is composed of diverse sources of both informal and formal mathematics, namely
-- ArXiv.math (
+- ArXiv.math (10GB)
 - Open-source math textbooks (50MB)
 - Formal mathematics libraries (500MB)
 - Lean mathlib and other Lean repositories