---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: complex
    dtype: string
  - name: simple
    dtype: string
  splits:
  - name: train
    num_bytes: 60695793
    num_examples: 154582
  - name: validation
    num_bytes: 7569490
    num_examples: 19322
  - name: test
    num_bytes: 7546266
    num_examples: 19322
  - name: all
    num_bytes: 75811549
    num_examples: 193226
  download_size: 99574719
  dataset_size: 151623098
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
  - split: all
    path: data/all-*
license: cc-by-sa-4.0
task_categories:
- text2text-generation
language:
- en
pretty_name: MinWikiSplit
---

# MinWikiSplit

Preprocessed version of [MinWikiSplit](https://arxiv.org/abs/1909.12131).  
For details of the preprocessing steps, see [the preprocessing script](https://github.com/nttcslab-nlp/wikisplit-pp/src/datasets/min_wiki_split.py).  
This preprocessed dataset serves as the basis for [MinWikiSplit++](https://huggingface.co/datasets/cl-nagoya/min-wikisplit-pp).
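The dataset can be loaded with the 🤗 Datasets library. Below is a minimal usage sketch; the repository id `cl-nagoya/min-wikisplit` is an assumption (substitute the actual id of this dataset if it differs):

```python
from datasets import load_dataset

# Repository id is assumed; adjust if this card is hosted under a different id.
dataset = load_dataset("cl-nagoya/min-wikisplit")

# Available splits: train, validation, test, and all (the three splits combined).
print(dataset)

# Each example has an integer "id", a "complex" source sentence,
# and its "simple" split-and-rephrased counterpart.
print(dataset["train"][0])
```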

## Dataset Description

- **Repository:** https://github.com/nttcslab-nlp/wikisplit-pp
- **Paper:** https://arxiv.org/abs/2404.09002