---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
  splits:
    - name: train
      num_bytes: 34065019260
      num_examples: 1039455
  download_size: 15973004443
  dataset_size: 34065019260
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# OpenWebTextCorpus tokenized for Llama 3

This dataset is a pre-tokenized version of the Skylion007/openwebtext dataset, tokenized with the Llama 3 tokenizer. As such, it follows the same licensing as the original openwebtext dataset.
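
If you just want to consume the tokens, the rows can be loaded directly with the `datasets` library. A minimal sketch (the repo id `chanind/openwebtext-llama3` is taken from this page's header):

```python
from datasets import load_dataset

# Stream the pre-tokenized split; each row is a fixed-length list of
# Llama 3 token ids under "input_ids", so no tokenizer call is needed.
ds = load_dataset("chanind/openwebtext-llama3", split="train", streaming=True)
first_row = next(iter(ds))
print(len(first_row["input_ids"]))  # 8192 tokens per row (context_size)
```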

Pre-tokenizing is a performance optimization for using the openwebtext dataset with a Llama 3 model. The dataset was created using SAELens with the following settings (see the sketch after this list for what they mean in practice):

- context_size: 8192
- shuffled: true
- begin_batch_token: "bos"
- begin_sequence_token: null
- sequence_separator_token: "eos"
- sae_lens_version: "3.3.0"
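
As a rough illustration of what these settings mean (this is not the SAELens implementation itself; the tokenizer id and the `pack_documents` helper below are assumptions), documents are concatenated into fixed-length rows, each row starts with the bos token, and documents within a row are separated by the eos token:

```python
from transformers import AutoTokenizer

# Assumed tokenizer id; any checkpoint sharing the Llama 3 tokenizer works.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
CONTEXT_SIZE = 8192  # context_size


def pack_documents(docs):
    """Hypothetical packing loop mirroring the settings above:
    begin_batch_token="bos"        -> each row starts with the bos token
    begin_sequence_token=null      -> no extra token at the start of a document
    sequence_separator_token="eos" -> eos between consecutive documents
    """
    rows, buffer = [], []
    for doc in docs:
        if buffer:
            buffer.append(tokenizer.eos_token_id)  # separator between documents
        buffer.extend(tokenizer.encode(doc, add_special_tokens=False))
        # Emit full rows, reserving one slot for the leading bos token.
        while len(buffer) >= CONTEXT_SIZE - 1:
            rows.append([tokenizer.bos_token_id] + buffer[: CONTEXT_SIZE - 1])
            buffer = buffer[CONTEXT_SIZE - 1:]
    return rows
```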

The eos token was used as the separator between sequences because it gave the lowest loss in our experiments. Ideally we would match the tokenization settings used in the original Llama 3 training run, so if you have information that Llama 3 was trained with a different tokenization setup, please reach out!
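
You can see this structure by inspecting a row from the loading example above (the tokenizer id is again an assumption):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")  # assumed id

ids = first_row["input_ids"]  # a row from the loading example above
print(ids[0] == tokenizer.bos_token_id)   # True: each row begins with the bos token
print(ids.count(tokenizer.eos_token_id))  # number of document boundaries in this row
```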