---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: complex
    dtype: string
  - name: simple
    dtype: string
  splits:
  - name: train
    num_bytes: 60695793
    num_examples: 154582
  - name: validation
    num_bytes: 7569490
    num_examples: 19322
  - name: test
    num_bytes: 7546266
    num_examples: 19322
  - name: all
    num_bytes: 75811549
    num_examples: 193226
  download_size: 99574719
  dataset_size: 151623098
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
  - split: all
    path: data/all-*
license: cc-by-sa-4.0
task_categories:
- text2text-generation
language:
- en
pretty_name: MinWikiSplit
---
# MinWikiSplit
A preprocessed version of [MinWikiSplit](https://arxiv.org/abs/1909.12131).
For details on the preprocessing steps, see [this script](https://github.com/nttcslab-nlp/wikisplit-pp/src/datasets/min_wiki_split.py).
This preprocessed dataset serves as the basis for [MinWikiSplit++](https://huggingface.co/datasets/cl-nagoya/min-wikisplit-pp).
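A minimal loading sketch with the 🤗 `datasets` library. The repository ID `hpprc/min-wikisplit` is an assumption based on this card's location; adjust it if the dataset lives under a different namespace.

```python
from datasets import load_dataset

# Load the default config; the Hub repository ID below is assumed,
# not confirmed by this card.
ds = load_dataset("hpprc/min-wikisplit")

# Splits: train / validation / test, plus "all" combining them.
print(ds)

# Each example pairs a complex sentence with its simplified split.
example = ds["train"][0]
print(example["id"])
print(example["complex"])
print(example["simple"])
```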
## Dataset Description
- **Repository:** https://github.com/nttcslab-nlp/wikisplit-pp
- **Paper:** https://arxiv.org/abs/2404.09002