---
language:
- en
task_categories:
- question-answering
dataset_info:
  config_name: gpt-4
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: url
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 24166736905
    num_examples: 21462234
  download_size: 12274801108
  dataset_size: 24166736905
configs:
- config_name: gpt-4
  data_files:
  - split: train
    path: gpt-4/train-*
tags:
- wikipedia
---
This is a Wikipedia passages dataset for open-domain question answering (ODQA) retrievers. Each passage has roughly 256 tokens, split with the GPT-4 tokenizer via tiktoken.
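The splitting step can be pictured as a greedy chunker over token ids. The sketch below is illustrative only: the actual dataset was produced with tiktoken's GPT-4 encoding, and the real pipeline may handle boundaries differently; here token ids are abstracted as a plain list.

```python
def chunk_tokens(token_ids, max_tokens=256):
    """Greedily split a token-id sequence into chunks of at most
    `max_tokens` tokens. A simplified sketch of the ~256-token split;
    it does not try to respect sentence boundaries."""
    return [token_ids[i:i + max_tokens]
            for i in range(0, len(token_ids), max_tokens)]

# Example: a 600-token article becomes chunks of 256, 256, and 88 tokens.
chunks = chunk_tokens(list(range(600)))
print([len(c) for c in chunks])  # [256, 256, 88]
```

With real text you would first encode with `tiktoken.encoding_for_model("gpt-4")`, chunk the resulting ids, and decode each chunk back to a passage.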
Token count per passage (tiktoken, GPT-4 encoding):

| Tokens | Passages | Percent |
| --- | --- | --- |
| ~128 | 1415068 | 6.59% |
| 128~256 | 1290011 | 6.01% |
| 256~512 | 18756476 | 87.39% |
| 512~1024 | 667 | 0.00% |
| 1024~2048 | 12 | 0.00% |
| 2048~4096 | 0 | 0.00% |
| 4096~8192 | 0 | 0.00% |
| 8192~16384 | 0 | 0.00% |
| 16384~32768 | 0 | 0.00% |
| 32768~65536 | 0 | 0.00% |
| 65536~128000 | 0 | 0.00% |
| 128000~ | 0 | 0.00% |

Text length per passage:

| Text length | Passages | Percent |
| --- | --- | --- |
| ~512 | 1556876 | 7.25% |
| 512~1024 | 6074975 | 28.31% |
| 1024~2048 | 13830329 | 64.44% |
| 2048~4096 | 49 | 0.00% |
| 4096~8192 | 2 | 0.00% |
| 8192~16384 | 3 | 0.00% |
| 16384~32768 | 0 | 0.00% |
| 32768~65536 | 0 | 0.00% |
| 65536~ | 0 | 0.00% |