---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: input_ids
    sequence: int32
  - name: token_type_ids
    sequence: int8
  - name: attention_mask
    sequence: int8
  splits:
  - name: train
    num_bytes: 27888212
    num_examples: 36718
  download_size: 11634178
  dataset_size: 27888212
---
# Dataset Card for "llama-65b-tokenized-wikitext-2-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
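
The features above describe a wikitext-2 corpus that has been pre-tokenized, carrying the raw `text` alongside the tokenizer outputs (`input_ids`, `token_type_ids`, `attention_mask`). As a minimal sketch of how such a dataset can be loaded and inspected with the 🤗 `datasets` library — the repository id below is a placeholder, since this card does not state the namespace it is hosted under:

```python
from datasets import load_dataset

# Placeholder repo id: substitute the actual "<namespace>/llama-65b-tokenized-wikitext-2-v1" path.
ds = load_dataset("<namespace>/llama-65b-tokenized-wikitext-2-v1", split="train")

# Inspect the schema declared in the card's metadata:
# text (string), input_ids (int32), token_type_ids (int8), attention_mask (int8)
print(ds.features)

# Look at the first pre-tokenized example.
print(ds[0]["input_ids"][:10])
```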