---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 34602046700
    num_examples: 134457967
  download_size: 13058779121
  dataset_size: 34602046700
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
Dataset preprocessed with the bert-cased tokenizer: sentences are truncated to a maximum of 128 tokens. Each example is a single sentence (not a sentence pair); all sentence pairs were extracted from the source corpora.
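A minimal sketch of that preprocessing, assuming the standard `bert-base-cased` checkpoint and the usual `datasets`/`transformers` APIs; the exact script used to build this dataset is not included in the card, so treat this as illustrative only:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumption: "bert-cased" refers to the bert-base-cased checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def truncate_to_128(example):
    # Tokenize a single sentence (no sentence pairs) and cut it at 128 tokens.
    ids = tokenizer(example["text"], truncation=True, max_length=128)["input_ids"]
    return {"text": tokenizer.decode(ids, skip_special_tokens=True)}

# Example with one of the source corpora listed below.
raw = load_dataset("bookcorpus", split="train")
processed = raw.map(truncate_to_128)
```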
Original datasets:
- https://huggingface.co/datasets/bookcorpus
- https://huggingface.co/datasets/wikipedia (config: 20220301.en)
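The processed data can be loaded directly through the `datasets` library; the repo id below is a placeholder for this dataset's actual Hugging Face path, which is not given in the card:

```python
from datasets import load_dataset

# Placeholder repo id; substitute this dataset's actual Hugging Face path.
ds = load_dataset("<user>/<this-dataset>", split="train")
print(ds.num_rows)    # 134457967 examples per the metadata above
print(ds[0]["text"])  # a single sentence, truncated to 128 tokens
```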