---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: complex
    dtype: string
  - name: simple_reversed
    dtype: string
  - name: simple_tokenized
    sequence: string
  - name: simple_original
    dtype: string
  - name: entailment_prob
    dtype: float64
  splits:
  - name: train
    num_bytes: 115032683
    num_examples: 139241
  - name: validation
    num_bytes: 14334442
    num_examples: 17424
  - name: test
    num_bytes: 14285722
    num_examples: 17412
  download_size: 91848881
  dataset_size: 143652847
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: cc-by-sa-4.0
task_categories:
- text2text-generation
language:
- en
pretty_name: MinWikiSplit++
---
# MinWikiSplit++
This dataset is the HuggingFace version of MinWikiSplit++.
MinWikiSplit++ enhances the original [MinWikiSplit](https://aclanthology.org/W19-8615/) by applying two techniques: filtering through NLI classification and sentence-order reversing, which remove noise and reduce hallucinations relative to the original corpus.
The preprocessed MinWikiSplit dataset that formed the basis for this can be found [here](https://huggingface.co/datasets/cl-nagoya/min-wikisplit).
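The NLI filtering step can be illustrated with a minimal sketch (an assumption about the general recipe, not the authors' exact pipeline): score each simple sentence as a hypothesis against the complex sentence as the premise, using the DeBERTa MNLI checkpoint cited below, then keep examples whose average entailment probability is high enough.
```python
# Minimal sketch of NLI-based filtering; assumes the `transformers` library.
# The checkpoint matches the one this card cites, but the scoring code here
# is illustrative, not the authors' exact implementation.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "microsoft/deberta-v2-xxlarge-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def avg_entailment_prob(complex_sent: str, simple_sents: list[str]) -> float:
    # Pair the complex sentence (premise) with each simple sentence (hypothesis).
    inputs = tokenizer(
        [complex_sent] * len(simple_sents),
        simple_sents,
        return_tensors="pt",
        padding=True,
        truncation=True,
    )
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)
    entail_idx = model.config.label2id["ENTAILMENT"]
    return probs[:, entail_idx].mean().item()
```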
## Dataset Description
- **Repository:** https://github.com/nttcslab-nlp/wikisplit-pp
- **Paper:** https://arxiv.org/abs/2404.09002
- **Point of Contact:** [Hayato Tsukagoshi](mailto:tsukagoshi.hayato.r2@s.mail.nagoya-u.ac.jp)
## Usage
```python
import datasets as ds
dataset: ds.DatasetDict = ds.load_dataset("cl-nagoya/min-wikisplit-pp")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob'],
# num_rows: 139241
# })
# validation: Dataset({
# features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob'],
# num_rows: 17424
# })
# test: Dataset({
# features: ['id', 'complex', 'simple_reversed', 'simple_tokenized', 'simple_original', 'entailment_prob'],
# num_rows: 17412
# })
# })
```
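Each split is a regular `datasets.Dataset`, so individual examples can be indexed directly, for example:
```python
example = dataset["train"][0]
print(example["complex"])          # source: a single complex sentence
print(example["simple_reversed"])  # target: simple sentences, order reversed
print(example["entailment_prob"])  # NLI-based confidence for this example
```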
### Data Fields
- id: The ID of the example (note that these IDs are not compatible with those of the existing MinWikiSplit)
- complex: A complex sentence
- simple_reversed: Simple sentences with their order reversed
- simple_tokenized: A list of simple sentences split by [PySBD](https://github.com/nipunsadvilkar/pySBD), not reversed in order
- simple_original: Simple sentences in their original order
- entailment_prob: The average probability that each simple sentence is classified as entailment given the complex sentence as the premise. [DeBERTa-xxl](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli) is used for the NLI classification.
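Since `entailment_prob` is stored per example, the splits can be filtered further to whatever confidence level a downstream task requires. A minimal sketch (the 0.9 threshold is an arbitrary illustration, not a value from the paper):
```python
import datasets as ds

dataset: ds.DatasetDict = ds.load_dataset("cl-nagoya/min-wikisplit-pp")
# Keep only examples whose simple sentences are entailed with high confidence.
high_conf = dataset["train"].filter(lambda ex: ex["entailment_prob"] > 0.9)
print(high_conf.num_rows)
```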
## Paper
Tsukagoshi et al., [WikiSplit++: Easy Data Refinement for Split and Rephrase](https://arxiv.org/abs/2404.09002), LREC-COLING 2024.
## License
MinWikiSplit is built upon the [WikiSplit](https://github.com/google-research-datasets/wiki-split) dataset, which is distributed under the CC-BY-SA 4.0 license.
Therefore, this dataset follows suit and is distributed under the CC-BY-SA 4.0 license.