---
license: cc-by-sa-3.0
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: token_type_ids
    sequence: int8
  - name: attention_mask
    sequence: int8
  splits:
  - name: train
    num_bytes: 913008948
    num_examples: 296047
  - name: validation
    num_bytes: 1912080
    num_examples: 620
  - name: test
    num_bytes: 2164968
    num_examples: 702
  download_size: 243746513
  dataset_size: 917085996
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---

⚠️ Caution: this dataset is already tokenized with the SentencePiece tokenizer from `ibm-aimc/sigma-moe-small`.

This dataset is adapted from https://huggingface.co/datasets/EleutherAI/wikitext_document_level, which is in turn an adapted version of https://huggingface.co/datasets/wikitext.
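
Because the splits are pre-tokenized, each example already carries `input_ids`, `token_type_ids`, and `attention_mask` sequences and can be fed to a model without running a tokenizer. Below is a minimal usage sketch; the dataset repo id is a placeholder (substitute this repository's actual path), and decoding assumes the `ibm-aimc/sigma-moe-small` repo exposes standard tokenizer files loadable via `AutoTokenizer`:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder repo id -- replace with this dataset's actual path on the Hub.
ds = load_dataset("your-namespace/wikitext-sigma-moe-tokenized", split="validation")

# Each example is already tokenized: integer token ids plus masks.
example = ds[0]
print(example.keys())  # dict_keys(['input_ids', 'token_type_ids', 'attention_mask'])

# To inspect the underlying text, decode with the same tokenizer that
# produced the ids (assumption: tokenizer files are available in that repo).
tokenizer = AutoTokenizer.from_pretrained("ibm-aimc/sigma-moe-small")
print(tokenizer.decode(example["input_ids"][:50]))
```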