---
license: apache-2.0
language:
- en
pretty_name: 'Pretokenized Dolma: Pre-tokenized, Pre-shuffled Dolma'
size_categories:
- 100B<n<1T
---
## The Pretokenized Dolma Dataset

A pre-tokenized, pre-shuffled version of [Dolma](https://huggingface.co/datasets/allenai/dolma), the high-quality text corpus from AI2. This dataset is designed to be plug-and-play with the pico-train library.

### Overview

Key Features:
- Tokenized with the [allenai/OLMo-7B-0724-hf](https://huggingface.co/allenai/OLMo-7B-0724-hf) tokenizer, a BPE tokenizer with a vocabulary size of 50,280
- Sequence length: 2049 tokens (2048 + 1 for next-token prediction)
- Sharded into 10,000 Parquet files (~78MB each)
- ~420B tokens in total (enough to train for 200K steps at batch size 1024 and sequence length 2048)
- Ready for streaming via `datasets.load_dataset(..., streaming=True)`
- Pre-shuffled, so the order in which data is shown to models is consistent across training runs (a quick sanity check of these properties is sketched below)
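
A minimal sanity check of the properties above. This snippet is illustrative rather than part of the released tooling, and it assumes the dataset exposes a `train` split:

```python
from datasets import load_dataset

# Stream a single example; the split name "train" is an assumption.
stream = load_dataset("pico-lm/pretokenized-dolma", split="train", streaming=True)
example = next(iter(stream))

ids = example["input_ids"]
print(len(ids))           # expected: 2049 (2048 + 1 for next-token prediction)
print(max(ids) < 50280)   # token ids should fall inside the OLMo vocabulary
```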

### How it was built
We first downloaded the full Dolma corpus and selected a random 30% subset for preprocessing. Using the OLMo tokenizer, we tokenized the text and chunked it into sequences of 2049 tokens, separating documents with an end-of-sequence (`<eos>`) token.
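
Conceptually, the tokenize-and-chunk step looks something like the sketch below. This is not the released pipeline (see the pico-dataset repo linked further down); all function and variable names are illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B-0724-hf")
SEQ_LEN = 2049  # 2048 + 1 for next-token prediction

def chunk_documents(documents):
    """Tokenize documents, separate them with <eos>, and emit fixed-length chunks."""
    buffer = []
    for text in documents:
        buffer.extend(tokenizer.encode(text))
        buffer.append(tokenizer.eos_token_id)  # document boundary
        while len(buffer) >= SEQ_LEN:
            yield buffer[:SEQ_LEN]
            buffer = buffer[SEQ_LEN:]
    # any trailing partial chunk is dropped: only full-length sequences are kept
```

The real scripts of course operate over the sampled 30% of Dolma rather than an in-memory list of strings; see the pico-dataset repo below for the actual implementation.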

After tokenization, we shuffled the sequences and sampled evenly from the token stream to create 100 uniform shards, which were then further split into 10,000 smaller shards to support fast loading and parallel training. Only full-length (2049-token) sequences are retained to ensure consistency across samples.
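
Again as an illustrative sketch rather than the released code, the shuffle-and-shard step can be pictured as a single pass like the following (the actual pipeline shards in two passes, 100 then 10,000, and the names here are assumptions):

```python
import random

import pyarrow as pa
import pyarrow.parquet as pq

def write_shards(sequences, num_shards, prefix="shard"):
    """Shuffle fixed-length sequences and split them evenly across Parquet shards."""
    random.shuffle(sequences)
    shard_size = len(sequences) // num_shards  # drop the remainder so shards stay uniform
    for i in range(num_shards):
        shard = sequences[i * shard_size:(i + 1) * shard_size]
        table = pa.table({"input_ids": shard})
        pq.write_table(table, f"{prefix}-{i:05d}.parquet")
```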

The dataset is stored as Parquet files, each containing token sequences under the key `input_ids`.
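
Because the layout is plain Parquet, individual shards can also be read without the `datasets` library. The filename below is hypothetical and stands in for whichever shard you download:

```python
import pyarrow.parquet as pq

# Hypothetical local path; substitute a shard you have actually downloaded.
table = pq.read_table("shard-00000.parquet")
sequences = table.column("input_ids").to_pylist()
print(len(sequences[0]))  # each sequence holds 2049 token ids
```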

We release the exact scripts used to create this dataset in our [pico-lm/pico-dataset](https://github.com/pico-lm/pico-dataset) GitHub repo.

### Usage

```python
from datasets import load_dataset
dataset = load_dataset("pico-lm/pretokenized-dolma", streaming=True)
```
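
From there, one minimal (purely illustrative) way to turn the stream into next-token-prediction batches is sketched below; it assumes the split is named `train` and that PyTorch is installed. pico-train handles this internally, so this is only needed for standalone use:

```python
import torch

train_stream = dataset["train"]  # streaming load_dataset returns a dict of splits

batch_size = 8  # illustrative; the 200K-step run above assumes 1024
buffer = []
for example in train_stream:
    buffer.append(example["input_ids"])              # 2049 token ids per sequence
    if len(buffer) == batch_size:
        batch = torch.tensor(buffer, dtype=torch.long)  # (batch_size, 2049)
        inputs, labels = batch[:, :-1], batch[:, 1:]    # shift by one token
        # ...forward/backward pass with your model goes here...
        buffer = []
        break  # remove this to consume the full stream
```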