---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: proof-pile
size_categories: []
source_datasets: []
tags:
- math
- mathematics
- formal-mathematics
task_categories:
- text-generation
task_ids:
- language-modeling
---

Note: this repo is a WIP and does not yet implement all features described below. It is certainly not ready to be used to train a model.

# Dataset card for the proof-pile

The `proof-pile` is a 45GB pre-training dataset of mathematical text. The dataset is composed of diverse sources of both informal and formal mathematics, namely

- ArXiv.math (40GB)
- Open-source math textbooks (50MB)
- Formal mathematics libraries (500MB)
  - Lean mathlib and other Lean repositories
  - Isabelle AFP
  - Coq mathematical components and other Coq repositories
  - HOL Light
  - set.mm
  - Mizar Mathematical Library
- Math Overflow and Math Stack Exchange (500MB)
- Wiki-style sources (50MB)
  - ProofWiki
  - Wikipedia math articles
- MATH dataset (6MB)
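For orientation, here is a minimal sketch of how the dataset could be loaded with the Hugging Face `datasets` library once it is released; the repo id `path/to/proof-pile` is a placeholder, not the actual identifier, and streaming is used only because the full dataset is large.

```python
from datasets import load_dataset

# Placeholder repo id; substitute the real proof-pile identifier once published.
dataset = load_dataset("path/to/proof-pile", streaming=True)

# Inspect a single example without downloading the full 45GB.
for example in dataset["train"]:
    print(example)
    break
```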