

Wiki Sentences

A dataset of all English sentences in Wikipedia.

Taken from the Optimus project: https://github.com/ChunyuanLI/Optimus/blob/master/download_datasets.md

The dataset is 11.8 GB, so it is best loaded with streaming:

```python
from datasets import load_dataset

dataset = load_dataset("Fraser/wiki_sentences", split="train", streaming=True)
```
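A streaming dataset is an iterable, not an indexable list, so examples are pulled lazily as you iterate. As a minimal sketch of that pattern, the first few examples can be taken with `itertools.islice` (the `text` column name shown in the comment is an assumption and may differ in the actual dataset):

```python
from itertools import islice

def take(stream, n):
    """Return the first n examples from a (possibly very large) iterable."""
    return list(islice(stream, n))

# With the real dataset (requires network access):
#   dataset = load_dataset("Fraser/wiki_sentences", split="train", streaming=True)
#   first = take(dataset, 3)  # list of dicts, e.g. {"text": "..."} (column name assumed)

# Illustration with a stand-in stream of the same shape:
fake_stream = iter({"text": f"sentence {i}"} for i in range(100))
print(take(fake_stream, 3))
```

Because the stream is consumed lazily, this avoids downloading or decompressing the full 11.8 GB when only a sample is needed.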