---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- crowdsourced
languages:
- de
- en
- fi
- fr
- ru
- sv
licenses:
- cc-by-nc-4.0
multilinguality:
- multilingual
pretty_name: Opusparcus
size_categories:
- unknown
source_datasets:
- extended|open_subtitles
task_categories:
- conditional-text-generation
task_ids:
- conditional-text-generation-other-paraphrase generation
---
# Dataset Card for Opusparcus
*NB: This is the old format of the data set card. Generate a new one from the json in the repository!*
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [Language Bank of Finland – Metashare](http://urn.fi/urn:nbn:fi:lb-2018021221)
- **Paper:** [Mathias Creutz (2018): Open Subtitles Paraphrase Corpus for Six Languages](http://www.lrec-conf.org/proceedings/lrec2018/summaries/131.html)
- **Point of Contact:** Mathias Creutz (firstname dot lastname at helsinki dot fi)
### Dataset Summary
Opusparcus is a paraphrase corpus for six European languages: German,
English, Finnish, French, Russian, and Swedish. The paraphrases
consist of subtitles from movies and TV shows.
The data in Opusparcus has been extracted from
[OpenSubtitles2016](http://opus.nlpl.eu/OpenSubtitles2016.php), which
is in turn based on data from http://www.opensubtitles.org/.
For each target language, the Opusparcus data have been partitioned
into three types of data sets: training, validation and test sets. The
training sets are large, consisting of millions of sentence pairs, and
have been compiled automatically, with the help of probabilistic
ranking functions. The validation and test sets consist of sentence
pairs that have been annotated manually; each set contains
approximately 1000 sentence pairs that have been verified to be
acceptable paraphrases by two independent annotators.
### Supported Tasks and Leaderboards
**Tasks:** Paraphrase detection and generation
**Leaderboards:** Currently there is no Leaderboard for this dataset.
### Languages
German (de), English (en), Finnish (fi), French (fr), Russian (ru), Swedish (sv)
## Dataset Structure
When you download Opusparcus, you must always indicate the language
you want to retrieve, for instance:
```python
from datasets import load_dataset

data = load_dataset("GEM/opusparcus", lang="de")
```
The above command will download the validation and test sets for
German. If you additionally want to retrieve training data, you need
to specify the desired level of quality, such as "French, with 90%
quality of the training data":
```
data = load_dataset("GEM/opusparcus", lang="fr", quality=90)
```
The entries in the training sets have been ranked automatically by how
likely they are paraphrases, best first, worst last. The quality
parameter indicates the estimated proportion (in percent) of true
paraphrases in the training set. Allowed quality values range between
60 and 100, in increments of 5 (60, 65, 70, ..., 100). A value of 60
means that 60% of the sentence pairs in the training set are estimated
to be true paraphrases (and the remaining 40% are not). A higher value
produces a smaller but cleaner set. The smaller sets are subsets of
the larger sets, such that the `quality=95` set is a subset of
`quality=90`, which is a subset of `quality=85`, and so on.
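The nesting of the quality subsets follows from the ranking: each subset is a prefix of the same best-first list. The sketch below is illustrative only; the actual Opusparcus cutoffs were computed by the corpus authors with their ranking functions, and the per-pair probabilities here are hypothetical.

```python
def quality_subset(ranked_pairs, est_probs, quality):
    """Longest best-first prefix whose average estimated precision
    stays at or above `quality` percent. Illustrative sketch only."""
    cut = 0
    total = 0.0
    for i, p in enumerate(est_probs, start=1):
        total += p
        if total / i >= quality / 100:
            cut = i
    return ranked_pairs[:cut]

# Hypothetical per-pair paraphrase probabilities, best first.
pairs = ["pair1", "pair2", "pair3", "pair4", "pair5"]
probs = [0.99, 0.95, 0.90, 0.60, 0.40]

# A higher quality value yields a subset of a lower one.
assert set(quality_subset(pairs, probs, 95)) <= set(quality_subset(pairs, probs, 85))
```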
The default `quality` value, if omitted, is 100. At this level no
training data is loaded at all, which can be convenient if you are
only interested in the validation and test sets, which are
considerably smaller but manually annotated.
Note that as an alternative to typing the parameter values explicitly,
you can use configuration names. The following commands are
equivalent to the ones above:
```
data = load_dataset("GEM/opusparcus", "de.100")
data = load_dataset("GEM/opusparcus", "fr.90")
```
Remark regarding the optimal choice of training set quality: previous
work suggests that a larger but noisier set is better than a
smaller but cleaner one. See Sjöblom et al. (2018). [Paraphrase
Detection on Noisy Subtitles in Six
Languages](http://noisy-text.github.io/2018/pdf/W-NUT20189.pdf). In
*Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on
Noisy User-generated Text*, and Vahtola et al. (2021). [Coping with
Noisy Training Data Labels in Paraphrase
Detection](https://aclanthology.org/2021.wnut-1.32/). In *Proceedings
of the 7th Workshop on Noisy User-generated Text*.
### Data Instances
As a concrete example, loading the English data requesting 95% quality of
the train split produces the following:
```
>>> data = load_dataset("GEM/opusparcus", lang="en", quality=95)
>>> data
DatasetDict({
test: Dataset({
features: ['lang', 'sent1', 'sent2', 'annot_score', 'gem_id'],
num_rows: 982
})
validation: Dataset({
features: ['lang', 'sent1', 'sent2', 'annot_score', 'gem_id'],
num_rows: 1015
})
test.full: Dataset({
features: ['lang', 'sent1', 'sent2', 'annot_score', 'gem_id'],
num_rows: 1445
})
validation.full: Dataset({
features: ['lang', 'sent1', 'sent2', 'annot_score', 'gem_id'],
num_rows: 1455
})
train: Dataset({
features: ['lang', 'sent1', 'sent2', 'annot_score', 'gem_id'],
num_rows: 1000000
})
})
>>> data["test"][0]
{'annot_score': 4.0, 'gem_id': 'gem-opusparcus-test-1587', 'lang': 'en', 'sent1': "I haven 't been contacted by anybody .", 'sent2': "Nobody 's contacted me ."}
>>> data["validation"][2]
{'annot_score': 3.0, 'gem_id': 'gem-opusparcus-validation-1586', 'lang': 'en', 'sent1': 'No promises , okay ?', 'sent2': "I 'm not promising anything ."}
>>> data["train"][1000]
{'annot_score': 0.0, 'gem_id': 'gem-opusparcus-train-12501001', 'lang': 'en', 'sent1': 'Am I beautiful ?', 'sent2': 'Am I pretty ?'}
```
### Data Fields
- `sent1`: a tokenized sentence
- `sent2`: another tokenized sentence, which is potentially a paraphrase of `sent1`
- `annot_score`: a value between 1.0 and 4.0 indicating how good a pair of paraphrases `sent1` and `sent2` are, higher being better. (For the training sets, the value is 0.0, which indicates that no manual annotation has taken place.)
- `lang`: the language of the sentence pair
- `gem_id`: a unique identifier for the entry
**Additional information about the annotation scheme:**
The annotation scores given by an individual annotator are:
- **4: Good example of paraphrases** (dark green button in the
annotation tool): The two sentences can be used in the same situation
and essentially "mean the same thing".
- **3: Mostly good example of paraphrases** (light green button): It
is acceptable to think that the two sentences refer to the same thing,
although one sentence might be more specific than the other, or there
are differences in style, such as polite form versus familiar form.
- **2: Mostly bad example of paraphrases** (yellow button): There is
some connection between the sentences that explains why they occur
together, but one would not really consider them to mean the same
thing.
- **1: Bad example of paraphrases** (red button): There is no obvious
connection. The sentences mean different things.
If the two annotators fully agreed on the category, the value in the
`annot_score` field is 4.0, 3.0, 2.0 or 1.0. If the two annotators
chose adjacent categories, the value in this field will be 3.5, 2.5 or
1.5. For instance, a value of 2.5 means that one annotator gave a
score of 3 ("mostly good"), indicating a possible paraphrase pair,
whereas the other annotator scored this as a 2 ("mostly bad"), that
is, unlikely to be a paraphrase pair. If the annotators disagreed by
more than one category, the sentence pair was discarded and won't show
up in the datasets.
The training sets were not annotated manually. This is indicated by
the value 0.0 in the `annot_score` field.
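The score-combination rule described above can be written down directly. This is a minimal sketch of the described scheme, not code from the corpus tooling:

```python
def combine_annotations(score1, score2):
    """Combine two annotators' category choices (integers 1-4) into
    the released `annot_score` value: identical or adjacent categories
    are averaged; a disagreement of more than one category means the
    pair was discarded (returned here as None)."""
    if abs(score1 - score2) > 1:
        return None  # pair does not appear in the released data
    return (score1 + score2) / 2

assert combine_annotations(4, 4) == 4.0   # full agreement
assert combine_annotations(3, 2) == 2.5   # adjacent categories
assert combine_annotations(4, 2) is None  # discarded
```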
For an assessment of inter-annotator agreement, see Aulamo et
al. (2019). [Annotation of subtitle paraphrases using a new web
tool.](http://ceur-ws.org/Vol-2364/3_paper.pdf) In *Proceedings of the
Digital Humanities in the Nordic Countries 4th Conference*,
Copenhagen, Denmark.
### Data Splits
The data is split into training, validation and test sets. The
validation and test sets come in two versions, the regular validation
and test sets and the full sets, called validation.full and
test.full. The full sets contain all sentence pairs successfully
annotated by the annotators, including the sentence pairs that were
rejected as paraphrases. The annotation scores of the full sets thus
range between 1.0 and 4.0. The regular validation and test sets only
contain sentence pairs that qualify as paraphrases, scored between 3.0
and 4.0 by the annotators.
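Since the regular sets are exactly the pairs from the full sets scored between 3.0 and 4.0, one can be recovered from the other by filtering on `annot_score`. A minimal sketch over plain records:

```python
def paraphrase_pairs(full_split):
    """Keep only pairs accepted as paraphrases (annot_score between
    3.0 and 4.0), i.e. reconstruct the regular validation/test set
    from the corresponding .full set."""
    return [ex for ex in full_split if ex["annot_score"] >= 3.0]

full = [
    {"sent1": "No promises , okay ?",
     "sent2": "I 'm not promising anything .", "annot_score": 3.0},
    {"sent1": "Am I beautiful ?",
     "sent2": "What time is it ?", "annot_score": 1.5},
]
assert len(paraphrase_pairs(full)) == 1
```

With a loaded `DatasetDict`, the equivalent operation would be `data["validation.full"].filter(lambda ex: ex["annot_score"] >= 3.0)`.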
The number of sentence pairs in each data split is as follows for
each of the languages. For the train split, the range between the
smallest (`quality=95`) and largest (`quality=60`) configuration is shown.
| | train | valid | test | valid.full | test.full |
| ----- | ------ | ----- | ---- | ---------- | --------- |
| de | 0.59M .. 13M | 1013 | 1047 | 1582 | 1586 |
| en | 1.0M .. 35M | 1015 | 982 | 1455 | 1445 |
| fi | 0.48M .. 8.9M | 963 | 958 | 1760 | 1749 |
| fr | 0.94M .. 22M | 997 | 1007 | 1630 | 1674 |
| ru | 0.15M .. 15M | 1020 | 1068 | 1854 | 1855 |
| sv | 0.24M .. 4.5M | 984 | 947 | 1887 | 1901 |
## Dataset Creation
### Curation Rationale
Opusparcus was created in order to produce a *sentential* paraphrase corpus
for multiple languages containing *colloquial* language (as opposed to
news or religious text, for instance).
### Source Data
#### Initial Data Collection and Normalization
The data in Opusparcus has been extracted from
[OpenSubtitles2016](http://opus.nlpl.eu/OpenSubtitles2016.php), which
is in turn based on data from http://www.opensubtitles.org/.
The sentences have been tokenized.
#### Who are the source language producers?
The texts consist of subtitles that have been produced using
crowdsourcing.
### Annotations
#### Annotation process
The validation and test sets consist of sentence
pairs that have been annotated manually; each set contains
approximately 1000 sentence pairs that have been verified to be
acceptable paraphrases by two independent annotators.
The `annot_score` field reflects the judgments made by the annotators.
If the annotators fully agreed on the category (4.0: dark green, 3.0:
light green, 2.0: yellow, 1.0: red), the value of `annot_score` is
4.0, 3.0, 2.0 or 1.0. If the annotators chose adjacent categories,
the value in this field will be 3.5, 2.5 or 1.5. For instance, a
value of 2.5 means that one annotator gave a score of 3 ("mostly
good"), indicating a possible paraphrase pair, whereas the other
annotator scored this as a 2 ("mostly bad"), that is, unlikely to be a
paraphrase pair. If the annotators disagreed by more than one
category, the sentence pair was discarded and won't show up in the
datasets.
#### Who are the annotators?
Students and staff at the University of Helsinki (native or very
proficient speakers of the target languages)
### Personal and Sensitive Information
The datasets do not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
The goal of Opusparcus is to promote the support for colloquial language.
### Discussion of Biases
The data reflect the biases present in the movies and TV shows that
have been subtitled.
### Other Known Limitations
The sentence pairs in the validation and test sets have been selected
in such a manner that their Levenshtein distance (minimum edit
distance) exceeds a certain threshold. This guarantees that the manual
annotation effort focuses on "interesting" sentence pairs rather than
trivial variations (such as "It is good." vs. "It's good."). The
training sets, however, have not been prefiltered in this manner and
thus also contain highly similar sentences.
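The Levenshtein filter described above can be sketched as follows. The actual threshold used for Opusparcus is not stated here, so the distances in the example are only illustrative:

```python
def levenshtein(a, b):
    """Minimum edit distance between strings a and b
    (insertions, deletions, substitutions), via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

# Trivial variation: tiny edit distance, excluded from annotation.
assert levenshtein("It is good .", "It 's good .") <= 3
# "Interesting" pair: larger distance, kept for annotation.
assert levenshtein("No promises , okay ?", "I 'm not promising anything .") > 10
```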
## Additional Information
### Dataset Curators
Mathias Creutz, University of Helsinki, Finland
### Licensing Information
CC-BY-NC 4.0
### Citation Information
```
@InProceedings{creutz:lrec2018,
title = {Open Subtitles Paraphrase Corpus for Six Languages},
author={Mathias Creutz},
booktitle={Proceedings of the 11th edition of the Language Resources and Evaluation Conference (LREC 2018)},
year={2018},
month = {May 7-12},
address = {Miyazaki, Japan},
editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
publisher = {European Language Resources Association (ELRA)},
isbn = {979-10-95546-00-9},
language = {english},
url={http://www.lrec-conf.org/proceedings/lrec2018/pdf/131.pdf}
}
```
### Contributions
Thanks to [@mathiascreutz](https://github.com/mathiascreutz) for adding this dataset.