---
license: other
viewer: false
task_categories:
  - text-generation
language:
  - en
tags:
  - language-modeling
  - causal-lm
  - llm
pretty_name: Dolma
size_categories:
  - n>1T
extra_gated_prompt: >-
  Access to this dataset is automatically granted upon accepting the [**AI2
  ImpACT License - Medium Risk Artifacts (“MR
  Agreement”)**](https://allenai.org/licenses/impact-mr) and completing all
  fields below.
extra_gated_fields:
  Your full name: text
  Organization or entity you are affiliated with: text
  State or country you are located in: text
  Contact email: text
  Please describe your intended use of the medium risk artifact(s): text
  I AGREE to the terms and conditions of the MR Agreement above: checkbox
  I AGREE to AI2’s use of my information for legal notices and administrative matters: checkbox
  I CERTIFY that the information I have provided is true and accurate: checkbox
---

# Dolma

*Dolma's official logo: "dolma" written in yellow, rounded lowercase letters on a blue background.*

Dolma is a dataset of 3 trillion tokens drawn from a diverse mix of web content, academic publications, code, books, and encyclopedic materials. It is openly released under AI2's ImpACT license as a medium-risk artifact.

More information:

## Summary Statistics

| Source            | Type         | Gzip files (GB) | Documents (millions) | GPT-NeoX tokens (billions) |
|-------------------|--------------|----------------:|---------------------:|---------------------------:|
| CommonCrawl       | web          | 4,197           | 4,600                | 2,415                      |
| C4                | web          | 302             | 364                  | 175                        |
| peS2o             | academic     | 150             | 38.8                 | 57                         |
| The Stack         | code         | 675             | 236                  | 430                        |
| Project Gutenberg | books        | 6.6             | 0.052                | 4.8                        |
| Wikipedia         | encyclopedic | 5.8             | 6.1                  | 3.6                        |
| **Total**         |              | **5,334**       | **5,245**            | **3,084**                  |
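Because the per-source figures above are rounded, the column sums differ slightly from the stated totals. The following sketch (plain Python, with the column values transcribed from the table) checks that each total is consistent with its column to within rounding error; the tolerance of 3 units is an assumption chosen to cover the observed rounding.

```python
# Per-source summary statistics transcribed from the table above:
# (gzip files in GB, documents in millions, GPT-NeoX tokens in billions)
sources = {
    "CommonCrawl":       (4197.0, 4600.0,  2415.0),
    "C4":                (302.0,  364.0,   175.0),
    "peS2o":             (150.0,  38.8,    57.0),
    "The Stack":         (675.0,  236.0,   430.0),
    "Project Gutenberg": (6.6,    0.052,   4.8),
    "Wikipedia":         (5.8,    6.1,     3.6),
}

# Totals as stated in the table's final row.
stated_totals = (5334.0, 5245.0, 3084.0)

# Sum each column across all sources.
column_sums = tuple(sum(row[i] for row in sources.values()) for i in range(3))

# Per-source values are rounded, so allow a small tolerance on each total.
for computed, stated in zip(column_sums, stated_totals):
    assert abs(computed - stated) < 3, (computed, stated)

print(column_sums)
```

Running this shows the column sums land within about 2.5 units of the stated totals, consistent with per-source rounding rather than an arithmetic error in the table.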