---
license: apache-2.0
language:
  - en
pretty_name: 'Pico Dataset: Pre-tokenized, Pre-shuffled Dolma'
size_categories:
  - 100B<n<1T
---

# The Pico Dataset

A pre-tokenized, pre-shuffled version of Dolma, the high-quality text corpus from AI2.

## Overview

The Pico dataset simplifies training by providing:

- Pre-tokenized text in chunks of 2048 tokens, using the OLMo tokenizer
- Pre-shuffled data for consistent, reproducible training
- Streaming-friendly format
- 420B tokens total, enough for 200K steps at batch size 1024 (200,000 × 1024 × 2048 tokens ≈ 420B); see the sketch below for a quick check of the chunk size
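
A minimal sketch of peeking at one streamed record to verify the 2048-token chunking. It assumes a `train` split and an `input_ids` column; check `dataset.features` (or the dataset viewer) for the actual schema:

```python
from datasets import load_dataset

# Stream the data instead of downloading it; `split="train"` is assumed here
dataset = load_dataset("pico-lm/pretokenized-dolma", split="train", streaming=True)

# Grab the first example and inspect it
first = next(iter(dataset))
print(first.keys())              # see which columns the records actually expose
print(len(first["input_ids"]))   # expected: 2048 (assumes an `input_ids` column)
```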

## Benefits

- Storage Efficient: No need to download the full 10TB Dolma dataset
- Memory Efficient: Stream data directly with `load_dataset(..., streaming=True)`
- Reproducible: All models see identical data in identical order
- Fast: Skip tokenization during training (see the training-loop sketch after this list)
- Simple: Minimal boilerplate code needed
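
To make the "fast" and "memory efficient" points concrete, here is a rough sketch of feeding streamed, already-tokenized batches into a PyTorch training step. The `input_ids` column name, the `train` split, and the tiny batch size are illustrative assumptions rather than this dataset's documented schema:

```python
import torch
from datasets import load_dataset

dataset = load_dataset("pico-lm/pretokenized-dolma", split="train", streaming=True)

batch_size = 8   # illustrative only; the 200K-step recipe above assumes 1024
batch = []

for example in dataset:
    # The text is already tokenized and shuffled, so no tokenizer runs here
    batch.append(example["input_ids"])       # assumes an `input_ids` column
    if len(batch) == batch_size:
        input_ids = torch.tensor(batch)      # shape: (batch_size, 2048)
        # model forward pass / optimizer step would go here
        batch = []
        break  # drop this break to stream through the full 420B tokens
```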

## Usage

1. Set up your HuggingFace credentials in `.env`:

   ```bash
   HF_USERNAME=your_username
   HF_TOKEN=your_token # Get from https://huggingface.co/settings/tokens
   ```

2. Load the dataset in Python:

   ```python
   from datasets import load_dataset

   dataset = load_dataset("pico-lm/pretokenized-dolma", streaming=True)
   ```
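
If access requires authentication, here is a small sketch of wiring the `.env` credentials into the call; it assumes the `python-dotenv` package and a `train` split (logging in once with `huggingface-cli login` works just as well):

```python
import os

from datasets import load_dataset
from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads HF_USERNAME / HF_TOKEN from .env

dataset = load_dataset(
    "pico-lm/pretokenized-dolma",
    streaming=True,
    token=os.environ["HF_TOKEN"],  # pass the token explicitly
)

# Iterate over the streamed train split one example at a time
for example in dataset["train"]:
    print(example.keys())
    break
```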