---
language:
- tr
license:
- cc-by-3.0
- cc-by-4.0
- cc-by-sa-3.0
- mit
- other
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: AllNLITR
dataset_info:
- config_name: pair
features:
- name: anchor
dtype: string
- name: positive
dtype: string
splits:
- name: train
num_examples: 313601
- name: dev
num_examples: 6802
- name: test
num_examples: 6827
- config_name: pair-class
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_examples: 941086
- name: dev
num_examples: 19649
- name: test
num_examples: 19652
- config_name: pair-score
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_examples: 941086
- name: dev
num_examples: 19649
- name: test
num_examples: 19652
- config_name: triplet
features:
- name: anchor
dtype: string
- name: positive
dtype: string
- name: negative
dtype: string
splits:
- name: train
num_examples: 482091
- name: dev
num_examples: 6567
- name: test
num_examples: 6587
configs:
- config_name: pair
data_files:
- split: train
path: pair/train*
- split: dev
path: pair/dev*
- split: test
path: pair/test*
- config_name: pair-class
data_files:
- split: train
path: pair-class/train*
- split: dev
path: pair-class/dev*
- split: test
path: pair-class/test*
- config_name: pair-score
data_files:
- split: train
path: pair-score/train*
- split: dev
path: pair-score/dev*
- split: test
path: pair-score/test*
- config_name: triplet
data_files:
- split: train
path: triplet/train*
- split: dev
path: triplet/dev*
- split: test
path: triplet/test*
---
# Dataset Card for AllNLITR
This dataset is a formatted version of [NLI-TR](https://huggingface.co/datasets/boun-tabi/nli_tr) datasets, sharing the same licenses. The format is intended to be in line with [AllNLI](https://huggingface.co/datasets/sentence-transformers/all-nli) by [Sentence Transformers](https://sbert.net/) for ease of training.
Although originally intended for Natural Language Inference (NLI), this dataset can also be used to train or fine-tune embedding models for semantic textual similarity.
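The subsets can be loaded with 🤗 Datasets. A minimal loading sketch; the repository id below is a placeholder and should be replaced with the actual Hub id of this dataset:
```python
from datasets import load_dataset

repo_id = "<user>/all-nli-tr"  # placeholder; use the actual Hub repository id

# Each config name corresponds to one of the subsets described below.
pair_class = load_dataset(repo_id, "pair-class", split="train")
print(pair_class[0])
# {'premise': '...', 'hypothesis': '...', 'label': 1}
```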
## Dataset Subsets
### `pair-class` subset
* Columns: "premise", "hypothesis", "label"
* Column types: `str`, `str`, `class` with `{"0": "entailment", "1": "neutral", "2": "contradiction"}`
* Examples:
```python
{
'premise': 'A person on a horse jumps over a broken down airplane.',
'hypothesis': 'A person is training his horse for a competition.',
'label': 1,
}
```
* Collection strategy: Reading the premise, hypothesis and integer label from the SNLI-TR & MultiNLI-TR subsets of NLI-TR.
* Deduplicated: Yes
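The `label` column is stored as a `ClassLabel`, so the integer ids can be mapped back to their names. A small sketch (placeholder repository id):
```python
from datasets import load_dataset

repo_id = "<user>/all-nli-tr"  # placeholder; use the actual Hub repository id

pair_class = load_dataset(repo_id, "pair-class", split="train")
label_names = pair_class.features["label"].names  # ['entailment', 'neutral', 'contradiction']

example = pair_class[0]
print(example["premise"], example["hypothesis"], label_names[example["label"]])
```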
### `pair-score` subset
* Columns: "sentence1", "sentence2", "score"
* Column types: `str`, `str`, `float`
* Examples:
```python
{
'sentence1': 'A person on a horse jumps over a broken down airplane.',
'sentence2': 'A person is training his horse for a competition.',
'score': 0.5,
}
```
* Collection strategy: Taking the `pair-class` subset and remapping "entailment", "neutral" and "contradiction" to 1.0, 0.5 and 0.0, respectively, as sketched after this list.
* Deduplicated: Yes
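A minimal sketch of how this remapping could be reproduced from the `pair-class` subset (placeholder repository id):
```python
from datasets import load_dataset

repo_id = "<user>/all-nli-tr"  # placeholder; use the actual Hub repository id

# 0 = entailment, 1 = neutral, 2 = contradiction, as in the pair-class subset
label_to_score = {0: 1.0, 1: 0.5, 2: 0.0}

pair_class = load_dataset(repo_id, "pair-class", split="train")
pair_score = pair_class.map(
    lambda row: {
        "sentence1": row["premise"],
        "sentence2": row["hypothesis"],
        "score": label_to_score[row["label"]],
    },
    remove_columns=pair_class.column_names,
)
print(pair_score[0])
```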
### `pair` subset
* Columns: "anchor", "positive"
* Column types: `str`, `str`
* Examples:
```python
{
'anchor': 'A person on a horse jumps over a broken down airplane.',
'positive': 'A person is training his horse for a competition.',
}
```
* Collection strategy: Reading the SNLI-TR & MultiNLI-TR subsets and considering the "premise" as the "anchor" and the "hypothesis" as the "positive" if the label is "entailment". The reverse ("hypothesis" as "anchor" and "premise" as "positive") is not included.
* Deduplicated: Yes
### `triplet` subset
* Columns: "anchor", "positive", "negative"
* Column types: `str`, `str`, `str`
* Examples:
```python
{
'anchor': 'A person on a horse jumps over a broken down airplane.',
'positive': 'A person is outdoors, on a horse.',
'negative': 'A person is at a diner, ordering an omelette.',
}
```
* Collection strategy: Reading the SNLI-TR & MultiNLI-TR subsets and, for each "premise", building a list of entailing and contradictory sentences using the dataset labels, then forming all possible triplets from these two lists. The reverse ("hypothesis" as "anchor" and "premise" as "positive") is not included.
* Deduplicated: Yes
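Because the format follows AllNLI, the `triplet` subset can be plugged directly into a Sentence Transformers training loop. A minimal sketch assuming the v3+ trainer API; the base model and repository id are placeholders:
```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

repo_id = "<user>/all-nli-tr"                # placeholder; use the actual Hub repository id
model = SentenceTransformer("<base-model>")  # placeholder; pick a Turkish or multilingual base model

train_dataset = load_dataset(repo_id, "triplet", split="train")
eval_dataset = load_dataset(repo_id, "triplet", split="dev")

# MultipleNegativesRankingLoss works with (anchor, positive, negative) columns.
loss = losses.MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```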
## Citation Information
```
@inproceedings{budur-etal-2020-data,
title = "Data and Representation for Turkish Natural Language Inference",
author = "Budur, Emrah and
"{O}zçelik, Rıza and
G"{u}ng"{o}r, Tunga",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
abstract = "Large annotated datasets in NLP are overwhelmingly in English. This is an obstacle to progress in other languages. Unfortunately, obtaining new annotated resources for each task in each language would be prohibitively expensive. At the same time, commercial machine translation systems are now robust. Can we leverage these systems to translate English-language datasets automatically? In this paper, we offer a positive response for natural language inference (NLI) in Turkish. We translated two large English NLI datasets into Turkish and had a team of experts validate their translation quality and fidelity to the original labels. Using these datasets, we address core issues of representation for Turkish NLI. We find that in-language embeddings are essential and that morphological parsing can be avoided where the training set is large. Finally, we show that models trained on our machine-translated datasets are successful on human-translated evaluation sets. We share all code, models, and data publicly.",
}
``` |