nell | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100M<n<1B
- 10M<n<100M
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- entity-linking-retrieval
- fact-checking-retrieval
paperswithcode_id: nell
pretty_name: Never Ending Language Learning (NELL)
configs:
- nell_belief
- nell_belief_sentences
- nell_candidate
- nell_candidate_sentences
tags:
- relation-extraction
- text-to-structured
- text-to-tabular
dataset_info:
- config_name: nell_belief
features:
- name: entity
dtype: string
- name: relation
dtype: string
- name: value
dtype: string
- name: iteration_of_promotion
dtype: string
- name: score
dtype: string
- name: source
dtype: string
- name: entity_literal_strings
dtype: string
- name: value_literal_strings
dtype: string
- name: best_entity_literal_string
dtype: string
- name: best_value_literal_string
dtype: string
- name: categories_for_entity
dtype: string
- name: categories_for_value
dtype: string
- name: candidate_source
dtype: string
splits:
- name: train
num_bytes: 4592559704
num_examples: 2766079
download_size: 929107246
dataset_size: 4592559704
- config_name: nell_candidate
features:
- name: entity
dtype: string
- name: relation
dtype: string
- name: value
dtype: string
- name: iteration_of_promotion
dtype: string
- name: score
dtype: string
- name: source
dtype: string
- name: entity_literal_strings
dtype: string
- name: value_literal_strings
dtype: string
- name: best_entity_literal_string
dtype: string
- name: best_value_literal_string
dtype: string
- name: categories_for_entity
dtype: string
- name: categories_for_value
dtype: string
- name: candidate_source
dtype: string
splits:
- name: train
num_bytes: 23497433060
num_examples: 32687353
download_size: 2687057812
dataset_size: 23497433060
- config_name: nell_belief_sentences
features:
- name: entity
dtype: string
- name: relation
dtype: string
- name: value
dtype: string
- name: score
dtype: string
- name: sentence
dtype: string
- name: count
dtype: int32
- name: url
dtype: string
- name: sentence_type
dtype: string
splits:
- name: train
num_bytes: 4459368426
num_examples: 21031531
download_size: 929107246
dataset_size: 4459368426
- config_name: nell_candidate_sentences
features:
- name: entity
dtype: string
- name: relation
dtype: string
- name: value
dtype: string
- name: score
dtype: string
- name: sentence
dtype: string
- name: count
dtype: int32
- name: url
dtype: string
- name: sentence_type
dtype: string
splits:
- name: train
num_bytes: 20058197787
num_examples: 100866414
download_size: 2687057812
dataset_size: 20058197787
---
# Dataset Card for Never Ending Language Learning (NELL)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
http://rtw.ml.cmu.edu/rtw/
- **Repository:**
http://rtw.ml.cmu.edu/rtw/
- **Paper:**
Never-Ending Learning.
T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, J. Welling. In Proceedings of the Conference on Artificial Intelligence (AAAI), 2015
### Dataset Summary
This dataset provides version 1115 of the beliefs
extracted by CMU's Never Ending Language Learner (NELL) and version
1110 of the candidate beliefs extracted by NELL. See
http://rtw.ml.cmu.edu/rtw/overview. NELL is an open information
extraction system that attempts to read the ClueWeb09 corpus of 500
million web pages (http://boston.lti.cs.cmu.edu/Data/clueweb09/) as
well as general web searches.
The dataset has 4 configurations: nell_belief, nell_candidate,
nell_belief_sentences, and nell_candidate_sentences. nell_belief
contains the beliefs that NELL has promoted as likely true, while
nell_candidate contains candidate beliefs whose certainties are
lower. The two sentence configurations extract the CPL sentence
patterns filled with the applicable 'best' literal string for the
entities, and also provide sentences found via web searches that
contain the entities and relationships.
There are roughly 21M entries in nell_belief_sentences and 100M
sentences in nell_candidate_sentences.
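Each configuration can be loaded separately. A minimal usage sketch, assuming the standard `datasets` loading API and the configuration names listed above:
```
from datasets import load_dataset

# Load one of the four configurations listed above.
beliefs = load_dataset("nell", "nell_belief_sentences", split="train")
print(beliefs[0]["sentence"])
```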
From the NELL website:
- **Research Goal**
To build a never-ending machine learning system that acquires the ability to extract structured information from unstructured web pages. If successful, this will result in a knowledge base (i.e., a relational database) of structured information that mirrors the content of the Web. We call this system NELL (Never-Ending Language Learner).
- **Approach**
The inputs to NELL include (1) an initial ontology defining hundreds of categories (e.g., person, sportsTeam, fruit, emotion) and relations (e.g., playsOnTeam(athlete,sportsTeam), playsInstrument(musician,instrument)) that NELL is expected to read about, and (2) 10 to 15 seed examples of each category and relation.
Given these inputs, plus a collection of 500 million web pages and access to the remainder of the web through search engine APIs, NELL runs 24 hours per day, continuously, to perform two ongoing tasks:
Extract new instances of categories and relations. In other words, find noun phrases that represent new examples of the input categories (e.g., "Barack Obama" is a person and politician), and find pairs of noun phrases that correspond to instances of the input relations (e.g., the pair "Jason Giambi" and "Yankees" is an instance of the playsOnTeam relation). These new instances are added to the growing knowledge base of structured beliefs.
Learn to read better than yesterday. NELL uses a variety of methods to extract beliefs from the web. These are retrained, using the growing knowledge base as a self-supervised collection of training examples. The result is a semi-supervised learning method that couples the training of hundreds of different extraction methods for a wide range of categories and relations. Much of NELL’s current success is due to its algorithm for coupling the simultaneous training of many extraction methods.
For more information, see: http://rtw.ml.cmu.edu/rtw/resources
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Mostly English (en); other languages may appear.
## Dataset Structure
### Data Instances
There are four configurations for the dataset: nell_belief, nell_candidate, nell_belief_sentences, nell_candidate_sentences.
nell_belief and nell_candidate define:
```
{'best_entity_literal_string': 'Aspect Medical Systems',
'best_value_literal_string': '',
'candidate_source': '%5BSEAL-Iter%3A215-2011%2F02%2F26-04%3A27%3A09-%3Ctoken%3Daspect_medical_systems%2Cbiotechcompany%3E-From%3ACategory%3Abiotechcompany-using-KB+http%3A%2F%2Fwww.unionegroup.com%2Fhealthcare%2Fmfg_info.htm+http%3A%2F%2Fwww.conventionspc.com%2Fcompanies.html%2C+CPL-Iter%3A1103-2018%2F03%2F08-15%3A32%3A34-%3Ctoken%3Daspect_medical_systems%2Cbiotechcompany%3E-grant+support+from+_%092%09research+support+from+_%094%09unrestricted+educational+grant+from+_%092%09educational+grant+from+_%092%09research+grant+support+from+_%091%09various+financial+management+positions+at+_%091%5D',
'categories_for_entity': 'concept:biotechcompany',
'categories_for_value': 'concept:company',
'entity': 'concept:biotechcompany:aspect_medical_systems',
'entity_literal_strings': '"Aspect Medical Systems" "aspect medical systems"',
'iteration_of_promotion': '1103',
'relation': 'generalizations',
'score': '0.9244426550775064',
'source': 'MBL-Iter%3A1103-2018%2F03%2F18-01%3A35%3A42-From+ErrorBasedIntegrator+%28SEAL%28aspect_medical_systems%2Cbiotechcompany%29%2C+CPL%28aspect_medical_systems%2Cbiotechcompany%29%29',
'value': 'concept:biotechcompany',
'value_literal_strings': ''}
```
nell_belief_sentences and nell_candidate_sentences define:
```
{'count': 4,
'entity': 'biotechcompany:aspect_medical_systems',
'relation': 'generalizations',
'score': '0.9244426550775064',
'sentence': 'research support from [[ Aspect Medical Systems ]]',
'sentence_type': 'CPL',
'url': '',
'value': 'biotechcompany'}
```
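The `source` and `candidate_source` provenance strings are URL-encoded. A small sketch for making them readable, assuming form-style encoding in which `+` stands for a space:
```
from urllib.parse import unquote_plus

# The provenance string from the nell_belief example above.
source = ("MBL-Iter%3A1103-2018%2F03%2F18-01%3A35%3A42-From+ErrorBasedIntegrator"
          "+%28SEAL%28aspect_medical_systems%2Cbiotechcompany%29%2C"
          "+CPL%28aspect_medical_systems%2Cbiotechcompany%29%29")
print(unquote_plus(source))
# MBL-Iter:1103-2018/03/18-01:35:42-From ErrorBasedIntegrator
# (SEAL(aspect_medical_systems,biotechcompany), CPL(aspect_medical_systems,biotechcompany))
```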
### Data Fields
For the nell_belief and nell_candidate configurations (from http://rtw.ml.cmu.edu/rtw/faq):
* entity: The Entity part of the (Entity, Relation, Value) triple. Note that this will be the name of a concept and is not the literal string of characters seen by NELL from some text source, nor does it indicate the category membership of that concept.
* relation: The Relation part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be "generalizations". In the case of a relation instance, this will be the name of the relation.
* value: The Value part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be the name of the category. In the case of a relation instance, this will be another concept (like Entity).
* iteration_of_promotion: The point in NELL's life at which this category or relation instance was promoted to one that NELL believes to be true. This is a non-negative integer indicating the number of iterations of bootstrapping NELL had gone through.
* score: A confidence score for the belief. Note that NELL's scores are not actually probabilistic at this time.
* source: A summary of the provenance for the belief indicating the set of learning subcomponents (CPL, SEAL, etc.) that had submitted this belief as being potentially true.
* entity_literal_strings: The set of actual textual strings that NELL has read that it believes can refer to the concept indicated in the Entity column.
* value_literal_strings: For relations, the set of actual textual strings that NELL has read that it believes can refer to the concept indicated in the Value column. For categories, this should be empty but may contain something spurious.
* best_entity_literal_string: Of the set of strings in the entity_literal_strings column, the one string that best describes the concept.
* best_value_literal_string: The same, but for value_literal_strings.
* categories_for_entity: The full set of categories (which may be empty) to which NELL believes the concept indicated in the Entity column to belong.
* categories_for_value: For relations, the full set of categories (which may be empty) to which NELL believes the concept indicated in the Value column to belong. For categories, this should be empty but may contain something spurious.
* candidate_source: A free-form amalgamation of more specific provenance information describing the justification(s) NELL has for possibly believing this category or relation instance.
For the nell_belief_sentences and nell_candidate_sentences configurations, we extracted the underlying sentences, sentence counts, and URLs, and provide a shortened version of the entity, relation, and value fields by removing the strings "concept:" and "candidate:". There are two types of sentences, 'CPL' and 'OE', generated by two of NELL's modules: pattern matching and open web searching, respectively. There may be duplicates. The configuration is as follows:
* entity: The Entity part of the (Entity, Relation, Value) triple. Note that this will be the name of a concept and is not the literal string of characters seen by NELL from some text source, nor does it indicate the category membership of that concept.
* relation: The Relation part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be "generalizations". In the case of a relation instance, this will be the name of the relation.
* value: The Value part of the (Entity, Relation, Value) triple. In the case of a category instance, this will be the name of the category. In the case of a relation instance, this will be another concept (like Entity).
* score: A confidence score for the belief. Note that NELL's scores are not actually probabilistic at this time.
* sentence: the raw sentence. In 'CPL' type sentences, the entity and value are surrounded by "[[" and "]]"; 'OE' type sentences have no such markers (see the extraction sketch after this list).
* url: the url if there is one from which this sentence was extracted
* count: the count for this sentence
* sentence_type: either 'CPL' or 'OE'
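For 'CPL' sentences, the marked entity and value spans can be recovered with a small regular expression. A minimal sketch:
```
import re

def extract_marked_spans(sentence):
    # Return the strings enclosed in [[ ... ]] within a 'CPL' sentence.
    return [span.strip() for span in re.findall(r"\[\[(.*?)\]\]", sentence)]

print(extract_marked_spans("research support from [[ Aspect Medical Systems ]]"))
# ['Aspect Medical Systems']
```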
### Data Splits
All data is provided in a single train split; there are no validation or test splits.
## Dataset Creation
### Curation Rationale
This dataset was gathered and created over many years of running the NELL system on web data.
### Source Data
#### Initial Data Collection and Normalization
See the research paper on NELL. NELL searches a subset of the web
(Clueweb09) and the open web using various open information extraction
algorithms, including pattern matching.
#### Who are the source language producers?
The NELL authors at Carnegie Mellon University, plus data from ClueWeb09 and the open web.
### Annotations
#### Annotation process
The various open information extraction modules of NELL.
#### Who are the annotators?
Machine annotated.
### Personal and Sensitive Information
Unknown, but the data likely includes names of famous individuals.
## Considerations for Using the Data
### Social Impact of Dataset
The goal for the work is to help machines learn to read and understand the web.
### Discussion of Biases
Since the data is gathered from the web, there is likely to be biased text and relationships.
[More Information Needed]
### Other Known Limitations
The relationships and concepts gathered by NELL are not 100% accurate; the error rate may be as high as 30%.
See https://en.wikipedia.org/wiki/Never-Ending_Language_Learning
We did not 'tag' the entity and value in the 'OE' sentences; this might be an extension in the future.
## Additional Information
### Dataset Curators
The authors of NELL at Carnegie Mellon University.
### Licensing Information
There does not appear to be a license on http://rtw.ml.cmu.edu/rtw/resources. The data is made available by CMU on the web.
### Citation Information
```
@inproceedings{mitchell2015,
added-at = {2015-01-27T15:35:24.000+0100},
author = {Mitchell, T. and Cohen, W. and Hruschka, E. and Talukdar, P. and Betteridge, J. and Carlson, A. and Dalvi, B. and Gardner, M. and Kisiel, B. and Krishnamurthy, J. and Lao, N. and Mazaitis, K. and Mohamed, T. and Nakashole, N. and Platanios, E. and Ritter, A. and Samadi, M. and Settles, B. and Wang, R. and Wijaya, D. and Gupta, A. and Chen, X. and Saparov, A. and Greaves, M. and Welling, J.},
biburl = {https://www.bibsonomy.org/bibtex/263070703e6bb812852cca56574aed093/hotho},
booktitle = {AAAI},
description = {Papers by William W. Cohen},
interhash = {52d0d71f6f5b332dabc1412f18e3a93d},
intrahash = {63070703e6bb812852cca56574aed093},
keywords = {learning nell ontology semantic toread},
note = {Never-Ending Learning in AAAI-2015},
timestamp = {2015-01-27T15:35:24.000+0100},
title = {Never-Ending Learning},
url = {http://www.cs.cmu.edu/~wcohen/pubs.html},
year = 2015
}
```
### Contributions
Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset. |
neural_code_search | ---
pretty_name: Neural Code Search
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
- n<1K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: neural-code-search-evaluation-dataset
configs:
- evaluation_dataset
- search_corpus
dataset_info:
- config_name: evaluation_dataset
features:
- name: stackoverflow_id
dtype: int32
- name: question
dtype: string
- name: question_url
dtype: string
- name: question_author
dtype: string
- name: question_author_url
dtype: string
- name: answer
dtype: string
- name: answer_url
dtype: string
- name: answer_author
dtype: string
- name: answer_author_url
dtype: string
- name: examples
sequence: int32
- name: examples_url
sequence: string
splits:
- name: train
num_bytes: 296848
num_examples: 287
download_size: 383625
dataset_size: 296848
- config_name: search_corpus
features:
- name: id
dtype: int32
- name: filepath
dtype: string
- name: method_name
dtype: string
- name: start_line
dtype: int32
- name: end_line
dtype: int32
- name: url
dtype: string
splits:
- name: train
num_bytes: 1452630278
num_examples: 4716814
download_size: 121112543
dataset_size: 1452630278
---
# Dataset Card for Neural Code Search
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
[facebookresearch/Neural-Code-Search-Evaluation-Dataset](https://github.com/facebookresearch/Neural-Code-Search-Evaluation-Dataset/tree/master/data)
- **Repository:**
[Github](https://github.com/facebookresearch/Neural-Code-Search-Evaluation-Dataset.git)
- **Paper:**
[arXiv](https://arxiv.org/pdf/1908.09804.pdf)
### Dataset Summary
Neural-Code-Search-Evaluation-Dataset presents an evaluation dataset consisting of natural language query and code snippet pairs, with the hope that future work in this area can use this dataset as a common benchmark. We also provide the results of two code search models (NCS, UNIF) from recent work.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
EN - English
## Dataset Structure
### Data Instances
#### Search Corpus
The search corpus is indexed using all method bodies parsed from the 24,549 GitHub repositories. In total, there are 4,716,814 methods in this corpus. The code search model will find relevant code snippets (i.e. method bodies) from this corpus given a natural language query. In this data release, the following information is provided for each method in the corpus (see [Data Fields](#data-fields)).
#### Evaluation Dataset
The evaluation dataset is composed of 287 Stack Overflow question-and-answer pairs.
### Data Fields
#### Search Corpus
- id: Each method in the corpus has a unique numeric identifier. This ID number will also be referenced in our evaluation dataset.
- filepath: The file path, in the format :owner/:repo/relative-file-path-to-the-repo.
- method_name: The name of the method.
- start_line: Starting line number of the method in the file.
- end_line: Ending line number of the method in the file.
- url: GitHub link to the method body with commit ID and line numbers encoded.
#### Evaluation Dataset
- stackoverflow_id: Stack Overflow post ID.
- question: Title of the Stack Overflow post.
- question_url: URL of the Stack Overflow post.
- answer: Code snippet answer to the question.
- examples: IDs of methods in the search corpus that serve as code examples for the question (referencing the corpus `id` field above).
- examples_url: GitHub links to those code examples.
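The `examples` field links each question to method `id`s in the search corpus. A sketch of joining the two configurations, assuming the standard `datasets` loading API (note that the lookup table spans ~4.7M corpus rows, so it is memory-hungry):
```
from datasets import load_dataset

corpus = load_dataset("neural_code_search", "search_corpus", split="train")
evaluation = load_dataset("neural_code_search", "evaluation_dataset", split="train")

# Build an id -> method_name lookup over the search corpus.
id_to_method = {row["id"]: row["method_name"] for row in corpus}

question = evaluation[0]
for method_id in question["examples"]:
    print(question["question"], "->", id_to_method.get(method_id))
```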
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The most popular Android repositories on GitHub (ranked by the number of stars) are used to create the search corpus. For each indexed repository, a link specific to the commit that was used is provided. In total, there are 24,549 repositories.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
Hongyu Li, Seohyun Kim and Satish Chandra
### Licensing Information
CC-BY-NC 4.0 (Attribution-NonCommercial 4.0 International)
### Citation Information
Hongyu Li, Seohyun Kim, and Satish Chandra. "Neural Code Search Evaluation Dataset." arXiv:1908.09804 [cs.SE], 2019.
### Contributions
Thanks to [@vinaykudari](https://github.com/vinaykudari) for adding this dataset. |
news_commentary | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- cs
- de
- en
- es
- fr
- it
- ja
- nl
- pt
- ru
- zh
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: NewsCommentary
dataset_info:
- config_name: ar-cs
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- cs
splits:
- name: train
num_bytes: 51546460
num_examples: 52128
download_size: 16242918
dataset_size: 51546460
- config_name: ar-de
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- de
splits:
- name: train
num_bytes: 69681419
num_examples: 68916
download_size: 21446768
dataset_size: 69681419
- config_name: cs-de
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- de
splits:
- name: train
num_bytes: 57470799
num_examples: 172706
download_size: 21623462
dataset_size: 57470799
- config_name: ar-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 80655273
num_examples: 83187
download_size: 24714354
dataset_size: 80655273
- config_name: cs-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- en
splits:
- name: train
num_bytes: 54487874
num_examples: 177278
download_size: 20636368
dataset_size: 54487874
- config_name: de-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 73085451
num_examples: 223153
download_size: 26694093
dataset_size: 73085451
- config_name: ar-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- es
splits:
- name: train
num_bytes: 79255985
num_examples: 78074
download_size: 24027435
dataset_size: 79255985
- config_name: cs-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- es
splits:
- name: train
num_bytes: 56794825
num_examples: 170489
download_size: 20994380
dataset_size: 56794825
- config_name: de-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- es
splits:
- name: train
num_bytes: 74708740
num_examples: 209839
download_size: 26653320
dataset_size: 74708740
- config_name: en-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 78600789
num_examples: 238872
download_size: 28106064
dataset_size: 78600789
- config_name: ar-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- fr
splits:
- name: train
num_bytes: 71035061
num_examples: 69157
download_size: 21465481
dataset_size: 71035061
- config_name: cs-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- fr
splits:
- name: train
num_bytes: 50364837
num_examples: 148578
download_size: 18483528
dataset_size: 50364837
- config_name: de-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 67083899
num_examples: 185442
download_size: 23779967
dataset_size: 67083899
- config_name: en-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 70340014
num_examples: 209479
download_size: 24982452
dataset_size: 70340014
- config_name: es-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 71025933
num_examples: 195241
download_size: 24693126
dataset_size: 71025933
- config_name: ar-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- it
splits:
- name: train
num_bytes: 17413450
num_examples: 17227
download_size: 5186438
dataset_size: 17413450
- config_name: cs-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- it
splits:
- name: train
num_bytes: 10441845
num_examples: 30547
download_size: 3813656
dataset_size: 10441845
- config_name: de-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- it
splits:
- name: train
num_bytes: 13993454
num_examples: 38961
download_size: 4933419
dataset_size: 13993454
- config_name: en-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 14213972
num_examples: 40009
download_size: 4960768
dataset_size: 14213972
- config_name: es-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- it
splits:
- name: train
num_bytes: 15139636
num_examples: 41497
download_size: 5215173
dataset_size: 15139636
- config_name: fr-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- it
splits:
- name: train
num_bytes: 14216079
num_examples: 38485
download_size: 4867267
dataset_size: 14216079
- config_name: ar-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- ja
splits:
- name: train
num_bytes: 661992
num_examples: 569
download_size: 206664
dataset_size: 661992
- config_name: cs-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- ja
splits:
- name: train
num_bytes: 487902
num_examples: 622
download_size: 184374
dataset_size: 487902
- config_name: de-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- ja
splits:
- name: train
num_bytes: 465575
num_examples: 582
download_size: 171371
dataset_size: 465575
- config_name: en-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ja
splits:
- name: train
num_bytes: 485484
num_examples: 637
download_size: 178451
dataset_size: 485484
- config_name: es-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- ja
splits:
- name: train
num_bytes: 484463
num_examples: 602
download_size: 175281
dataset_size: 484463
- config_name: fr-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- ja
splits:
- name: train
num_bytes: 418188
num_examples: 519
download_size: 151400
dataset_size: 418188
- config_name: ar-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- nl
splits:
- name: train
num_bytes: 9054134
num_examples: 9047
download_size: 2765542
dataset_size: 9054134
- config_name: cs-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- nl
splits:
- name: train
num_bytes: 5860976
num_examples: 17358
download_size: 2174494
dataset_size: 5860976
- config_name: de-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- nl
splits:
- name: train
num_bytes: 7645565
num_examples: 21439
download_size: 2757414
dataset_size: 7645565
- config_name: en-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 7316599
num_examples: 19399
download_size: 2575916
dataset_size: 7316599
- config_name: es-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- nl
splits:
- name: train
num_bytes: 7560123
num_examples: 21012
download_size: 2674557
dataset_size: 7560123
- config_name: fr-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- nl
splits:
- name: train
num_bytes: 7603503
num_examples: 20898
download_size: 2659946
dataset_size: 7603503
- config_name: it-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- nl
splits:
- name: train
num_bytes: 5380912
num_examples: 15428
download_size: 1899094
dataset_size: 5380912
- config_name: ar-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- pt
splits:
- name: train
num_bytes: 11340074
num_examples: 11433
download_size: 3504173
dataset_size: 11340074
- config_name: cs-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- pt
splits:
- name: train
num_bytes: 6183725
num_examples: 18356
download_size: 2310039
dataset_size: 6183725
- config_name: de-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- pt
splits:
- name: train
num_bytes: 7699083
num_examples: 21884
download_size: 2794173
dataset_size: 7699083
- config_name: en-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: train
num_bytes: 9238819
num_examples: 25929
download_size: 3310748
dataset_size: 9238819
- config_name: es-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- pt
splits:
- name: train
num_bytes: 9195685
num_examples: 25551
download_size: 3278814
dataset_size: 9195685
- config_name: fr-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- pt
splits:
- name: train
num_bytes: 9261169
num_examples: 25642
download_size: 3254925
dataset_size: 9261169
- config_name: it-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- pt
splits:
- name: train
num_bytes: 3988570
num_examples: 11407
download_size: 1397344
dataset_size: 3988570
- config_name: nl-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- nl
- pt
splits:
- name: train
num_bytes: 3612339
num_examples: 10598
download_size: 1290715
dataset_size: 3612339
- config_name: ar-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- ru
splits:
- name: train
num_bytes: 105804303
num_examples: 84455
download_size: 28643600
dataset_size: 105804303
- config_name: cs-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- ru
splits:
- name: train
num_bytes: 71185695
num_examples: 161133
download_size: 21917168
dataset_size: 71185695
- config_name: de-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- ru
splits:
- name: train
num_bytes: 81812014
num_examples: 175905
download_size: 24610973
dataset_size: 81812014
- config_name: en-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 83282480
num_examples: 190104
download_size: 24849511
dataset_size: 83282480
- config_name: es-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- ru
splits:
- name: train
num_bytes: 84345850
num_examples: 180217
download_size: 24883942
dataset_size: 84345850
- config_name: fr-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 75967253
num_examples: 160740
download_size: 22385777
dataset_size: 75967253
- config_name: it-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- ru
splits:
- name: train
num_bytes: 12915073
num_examples: 27267
download_size: 3781318
dataset_size: 12915073
- config_name: ja-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ja
- ru
splits:
- name: train
num_bytes: 596166
num_examples: 586
download_size: 184791
dataset_size: 596166
- config_name: nl-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- nl
- ru
splits:
- name: train
num_bytes: 8933805
num_examples: 19112
download_size: 2662250
dataset_size: 8933805
- config_name: pt-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- pt
- ru
splits:
- name: train
num_bytes: 8645475
num_examples: 18458
download_size: 2584012
dataset_size: 8645475
- config_name: ar-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- zh
splits:
- name: train
num_bytes: 65483204
num_examples: 66021
download_size: 21625859
dataset_size: 65483204
- config_name: cs-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- zh
splits:
- name: train
num_bytes: 29971192
num_examples: 45424
download_size: 12495392
dataset_size: 29971192
- config_name: de-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- zh
splits:
- name: train
num_bytes: 39044704
num_examples: 59020
download_size: 15773631
dataset_size: 39044704
- config_name: en-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 44596087
num_examples: 69206
download_size: 18101984
dataset_size: 44596087
- config_name: es-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- zh
splits:
- name: train
num_bytes: 43940013
num_examples: 65424
download_size: 17424938
dataset_size: 43940013
- config_name: fr-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- zh
splits:
- name: train
num_bytes: 40144071
num_examples: 59060
download_size: 15817862
dataset_size: 40144071
- config_name: it-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- zh
splits:
- name: train
num_bytes: 9676756
num_examples: 14652
download_size: 3799012
dataset_size: 9676756
- config_name: ja-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ja
- zh
splits:
- name: train
num_bytes: 462685
num_examples: 570
download_size: 181924
dataset_size: 462685
- config_name: nl-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- nl
- zh
splits:
- name: train
num_bytes: 5509070
num_examples: 8433
download_size: 2218937
dataset_size: 5509070
- config_name: pt-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- pt
- zh
splits:
- name: train
num_bytes: 7152774
num_examples: 10873
download_size: 2889296
dataset_size: 7152774
- config_name: ru-zh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ru
- zh
splits:
- name: train
num_bytes: 43112824
num_examples: 47687
download_size: 14225498
dataset_size: 43112824
---
# Dataset Card for NewsCommentary
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/News-Commentary.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
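No documented example is available yet, but each language-pair configuration declared in the metadata above yields `id` and `translation` fields. A minimal loading sketch, assuming the standard `datasets` API:
```
from datasets import load_dataset

# "de-en" is one of the language-pair configs declared above.
pairs = load_dataset("news_commentary", "de-en", split="train")
row = pairs[0]
print(row["translation"]["de"])
print(row["translation"]["en"])
```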
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
newsgroup | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: 20 Newsgroups
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: 20-newsgroups
dataset_info:
- config_name: 18828_alt.atheism
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1669511
num_examples: 799
download_size: 14666916
dataset_size: 1669511
- config_name: 18828_comp.graphics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1661199
num_examples: 973
download_size: 14666916
dataset_size: 1661199
- config_name: 18828_comp.os.ms-windows.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2378739
num_examples: 985
download_size: 14666916
dataset_size: 2378739
- config_name: 18828_comp.sys.ibm.pc.hardware
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1185187
num_examples: 982
download_size: 14666916
dataset_size: 1185187
- config_name: 18828_comp.sys.mac.hardware
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1056264
num_examples: 961
download_size: 14666916
dataset_size: 1056264
- config_name: 18828_comp.windows.x
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1876297
num_examples: 980
download_size: 14666916
dataset_size: 1876297
- config_name: 18828_misc.forsale
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 925124
num_examples: 972
download_size: 14666916
dataset_size: 925124
- config_name: 18828_rec.autos
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1295307
num_examples: 990
download_size: 14666916
dataset_size: 1295307
- config_name: 18828_rec.motorcycles
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1206491
num_examples: 994
download_size: 14666916
dataset_size: 1206491
- config_name: 18828_rec.sport.baseball
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1369551
num_examples: 994
download_size: 14666916
dataset_size: 1369551
- config_name: 18828_rec.sport.hockey
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1758094
num_examples: 999
download_size: 14666916
dataset_size: 1758094
- config_name: 18828_sci.crypt
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2050727
num_examples: 991
download_size: 14666916
dataset_size: 2050727
- config_name: 18828_sci.electronics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1237175
num_examples: 981
download_size: 14666916
dataset_size: 1237175
- config_name: 18828_sci.med
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1886363
num_examples: 990
download_size: 14666916
dataset_size: 1886363
- config_name: 18828_sci.space
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1812803
num_examples: 987
download_size: 14666916
dataset_size: 1812803
- config_name: 18828_soc.religion.christian
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2307486
num_examples: 997
download_size: 14666916
dataset_size: 2307486
- config_name: 18828_talk.politics.guns
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1922992
num_examples: 910
download_size: 14666916
dataset_size: 1922992
- config_name: 18828_talk.politics.mideast
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2910324
num_examples: 940
download_size: 14666916
dataset_size: 2910324
- config_name: 18828_talk.politics.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2102809
num_examples: 775
download_size: 14666916
dataset_size: 2102809
- config_name: 18828_talk.religion.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1374261
num_examples: 628
download_size: 14666916
dataset_size: 1374261
- config_name: 19997_alt.atheism
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2562277
num_examples: 1000
download_size: 17332201
dataset_size: 2562277
- config_name: 19997_comp.graphics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2181673
num_examples: 1000
download_size: 17332201
dataset_size: 2181673
- config_name: 19997_comp.os.ms-windows.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2898760
num_examples: 1000
download_size: 17332201
dataset_size: 2898760
- config_name: 19997_comp.sys.ibm.pc.hardware
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1671166
num_examples: 1000
download_size: 17332201
dataset_size: 1671166
- config_name: 19997_comp.sys.mac.hardware
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1580881
num_examples: 1000
download_size: 17332201
dataset_size: 1580881
- config_name: 19997_comp.windows.x
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2418273
num_examples: 1000
download_size: 17332201
dataset_size: 2418273
- config_name: 19997_misc.forsale
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1412012
num_examples: 1000
download_size: 17332201
dataset_size: 1412012
- config_name: 19997_rec.autos
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1780502
num_examples: 1000
download_size: 17332201
dataset_size: 1780502
- config_name: 19997_rec.motorcycles
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1677964
num_examples: 1000
download_size: 17332201
dataset_size: 1677964
- config_name: 19997_rec.sport.baseball
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1835432
num_examples: 1000
download_size: 17332201
dataset_size: 1835432
- config_name: 19997_rec.sport.hockey
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2207282
num_examples: 1000
download_size: 17332201
dataset_size: 2207282
- config_name: 19997_sci.crypt
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2607835
num_examples: 1000
download_size: 17332201
dataset_size: 2607835
- config_name: 19997_sci.electronics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1732199
num_examples: 1000
download_size: 17332201
dataset_size: 1732199
- config_name: 19997_sci.med
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2388789
num_examples: 1000
download_size: 17332201
dataset_size: 2388789
- config_name: 19997_sci.space
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2351411
num_examples: 1000
download_size: 17332201
dataset_size: 2351411
- config_name: 19997_soc.religion.christian
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2743018
num_examples: 997
download_size: 17332201
dataset_size: 2743018
- config_name: 19997_talk.politics.guns
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2639343
num_examples: 1000
download_size: 17332201
dataset_size: 2639343
- config_name: 19997_talk.politics.mideast
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3695931
num_examples: 1000
download_size: 17332201
dataset_size: 3695931
- config_name: 19997_talk.politics.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3169183
num_examples: 1000
download_size: 17332201
dataset_size: 3169183
- config_name: 19997_talk.religion.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2658700
num_examples: 1000
download_size: 17332201
dataset_size: 2658700
- config_name: bydate_alt.atheism
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1042224
num_examples: 480
- name: test
num_bytes: 702920
num_examples: 319
download_size: 14464277
dataset_size: 1745144
- config_name: bydate_comp.graphics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 911665
num_examples: 584
- name: test
num_bytes: 849632
num_examples: 389
download_size: 14464277
dataset_size: 1761297
- config_name: bydate_comp.os.ms-windows.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1770988
num_examples: 591
- name: test
num_bytes: 706676
num_examples: 394
download_size: 14464277
dataset_size: 2477664
- config_name: bydate_comp.sys.ibm.pc.hardware
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 800446
num_examples: 590
- name: test
num_bytes: 485310
num_examples: 392
download_size: 14464277
dataset_size: 1285756
- config_name: bydate_comp.sys.mac.hardware
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 696311
num_examples: 578
- name: test
num_bytes: 468791
num_examples: 385
download_size: 14464277
dataset_size: 1165102
- config_name: bydate_comp.windows.x
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1243463
num_examples: 593
- name: test
num_bytes: 795366
num_examples: 395
download_size: 14464277
dataset_size: 2038829
- config_name: bydate_misc.forsale
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 611210
num_examples: 585
- name: test
num_bytes: 415902
num_examples: 390
download_size: 14464277
dataset_size: 1027112
- config_name: bydate_rec.autos
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 860646
num_examples: 594
- name: test
num_bytes: 535378
num_examples: 396
download_size: 14464277
dataset_size: 1396024
- config_name: bydate_rec.motorcycles
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 811151
num_examples: 598
- name: test
num_bytes: 497735
num_examples: 398
download_size: 14464277
dataset_size: 1308886
- config_name: bydate_rec.sport.baseball
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 850740
num_examples: 597
- name: test
num_bytes: 618609
num_examples: 397
download_size: 14464277
dataset_size: 1469349
- config_name: bydate_rec.sport.hockey
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1189652
num_examples: 600
- name: test
num_bytes: 666358
num_examples: 399
download_size: 14464277
dataset_size: 1856010
- config_name: bydate_sci.crypt
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1502448
num_examples: 595
- name: test
num_bytes: 657727
num_examples: 396
download_size: 14464277
dataset_size: 2160175
- config_name: bydate_sci.electronics
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 814856
num_examples: 591
- name: test
num_bytes: 523095
num_examples: 393
download_size: 14464277
dataset_size: 1337951
- config_name: bydate_sci.med
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1195201
num_examples: 594
- name: test
num_bytes: 791826
num_examples: 396
download_size: 14464277
dataset_size: 1987027
- config_name: bydate_sci.space
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1197965
num_examples: 593
- name: test
num_bytes: 721771
num_examples: 394
download_size: 14464277
dataset_size: 1919736
- config_name: bydate_soc.religion.christian
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1358047
num_examples: 599
- name: test
num_bytes: 1003668
num_examples: 398
download_size: 14464277
dataset_size: 2361715
- config_name: bydate_talk.politics.guns
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1313019
num_examples: 546
- name: test
num_bytes: 701477
num_examples: 364
download_size: 14464277
dataset_size: 2014496
- config_name: bydate_talk.politics.mideast
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1765833
num_examples: 564
- name: test
num_bytes: 1236435
num_examples: 376
download_size: 14464277
dataset_size: 3002268
- config_name: bydate_talk.politics.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1328057
num_examples: 465
- name: test
num_bytes: 853395
num_examples: 310
download_size: 14464277
dataset_size: 2181452
- config_name: bydate_talk.religion.misc
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 835761
num_examples: 377
- name: test
num_bytes: 598452
num_examples: 251
download_size: 14464277
dataset_size: 1434213
---
# Dataset Card for "newsgroup"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://qwone.com/~jason/20Newsgroups/](http://qwone.com/~jason/20Newsgroups/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [NewsWeeder: Learning to Filter Netnews](https://doi.org/10.1016/B978-1-55860-377-6.50048-7)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 929.27 MB
- **Size of the generated dataset:** 124.41 MB
- **Total amount of disk used:** 1.05 GB
### Dataset Summary
The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across
20 different newsgroups. To the best of my knowledge, it was originally collected by Ken Lang, probably for his Newsweeder:
Learning to filter netnews paper, though he does not explicitly mention this collection. The 20 newsgroups collection has become
a popular data set for experiments in text applications of machine learning techniques, such as text classification and text clustering.
The 18828 version of the collection does not include cross-posts (duplicates) and includes only the "From" and "Subject" headers.
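Each configuration pairs a version prefix (`19997`, `18828`, or `bydate`) with a newsgroup name. A minimal loading sketch, assuming the standard `datasets` API; only the `bydate` configurations provide both train and test splits:
```
from datasets import load_dataset

train = load_dataset("newsgroup", "bydate_sci.space", split="train")
test = load_dataset("newsgroup", "bydate_sci.space", split="test")
print(len(train), len(test))  # 593 / 394 per the metadata above
print(train[0]["text"][:200])
```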
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### 18828_alt.atheism
- **Size of downloaded dataset files:** 14.67 MB
- **Size of the generated dataset:** 1.67 MB
- **Total amount of disk used:** 16.34 MB
An example of 'train' looks as follows.
```
```
#### 18828_comp.graphics
- **Size of downloaded dataset files:** 14.67 MB
- **Size of the generated dataset:** 1.66 MB
- **Total amount of disk used:** 16.33 MB
An example of 'train' looks as follows.
```
```
#### 18828_comp.os.ms-windows.misc
- **Size of downloaded dataset files:** 14.67 MB
- **Size of the generated dataset:** 2.38 MB
- **Total amount of disk used:** 17.05 MB
An example of 'train' looks as follows.
```
```
#### 18828_comp.sys.ibm.pc.hardware
- **Size of downloaded dataset files:** 14.67 MB
- **Size of the generated dataset:** 1.18 MB
- **Total amount of disk used:** 15.85 MB
An example of 'train' looks as follows.
```
```
#### 18828_comp.sys.mac.hardware
- **Size of downloaded dataset files:** 14.67 MB
- **Size of the generated dataset:** 1.06 MB
- **Total amount of disk used:** 15.73 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### 18828_alt.atheism
- `text`: a `string` feature.
#### 18828_comp.graphics
- `text`: a `string` feature.
#### 18828_comp.os.ms-windows.misc
- `text`: a `string` feature.
#### 18828_comp.sys.ibm.pc.hardware
- `text`: a `string` feature.
#### 18828_comp.sys.mac.hardware
- `text`: a `string` feature.
### Data Splits
| name |train|
|------------------------------|----:|
|18828_alt.atheism | 799|
|18828_comp.graphics | 973|
|18828_comp.os.ms-windows.misc | 985|
|18828_comp.sys.ibm.pc.hardware| 982|
|18828_comp.sys.mac.hardware | 961|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@incollection{LANG1995331,
title = {NewsWeeder: Learning to Filter Netnews},
editor = {Armand Prieditis and Stuart Russell},
booktitle = {Machine Learning Proceedings 1995},
publisher = {Morgan Kaufmann},
address = {San Francisco (CA)},
pages = {331-339},
year = {1995},
isbn = {978-1-55860-377-6},
doi = {https://doi.org/10.1016/B978-1-55860-377-6.50048-7},
url = {https://www.sciencedirect.com/science/article/pii/B9781558603776500487},
author = {Ken Lang},
}
```
### Contributions
Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
newsph | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- fil
- tl
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: newsph-nli
pretty_name: NewsPH-NLI
dataset_info:
features:
- name: text
dtype: string
config_name: newsph
splits:
- name: train
num_bytes: 298833914
num_examples: 2190465
download_size: 104086466
dataset_size: 298833914
---
# Dataset Card for NewsPH
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Filipino Text Benchmarks](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Repository:**
- **Paper:** [Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation](https://arxiv.org/abs/2010.11574)
- **Leaderboard:**
- **Point of Contact:** [Jan Christian Blaise Cruz](jan_christian_cruz@dlsu.edu.ph)
### Dataset Summary
A raw collection of news articles in Filipino, used to produce the NewsPH-NLI dataset in Cruz et al. (2020).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Tagalog/Filipino
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `text` (`str`)
The dataset is in plaintext and only has one field ("text"). It can be used for language modeling.
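A minimal sketch of loading the corpus for language modeling with the `datasets` library:
```
from datasets import load_dataset

newsph = load_dataset("newsph", split="train")
print(newsph[0]["text"])  # one plaintext Filipino news line per example
```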
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jcblaisecruz02](https://github.com/jcblaisecruz02) for adding this dataset. |
newsph_nli | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- tl
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- natural-language-inference
paperswithcode_id: newsph-nli
pretty_name: NewsPH NLI
dataset_info:
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 154510599
num_examples: 420000
- name: test
num_bytes: 3283665
num_examples: 9000
- name: validation
num_bytes: 33015530
num_examples: 90000
download_size: 76565287
dataset_size: 190809794
---
# Dataset Card for NewsPH NLI
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NewsPH NLI homepage](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Repository:** [NewsPH NLI repository](https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks)
- **Paper:** [Arxiv paper](https://arxiv.org/pdf/2010.11574.pdf)
- **Leaderboard:**
- **Point of Contact:** [Jan Christian Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Dataset Summary
The first benchmark dataset for sentence entailment in the low-resource Filipino language. It was constructed by exploiting the structure of news articles and contains 600,000 premise-hypothesis pairs in a 70-15-15 split for training, validation, and testing.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains news articles in Filipino (Tagalog) scraped from all major Philippine news sites online.
## Dataset Structure
### Data Instances
Sample data:
```
{
  "premise": "Alam ba ninyo ang ginawa ni Erap na noon ay lasing na lasing na rin?",
  "hypothesis": "Ininom niya ang alak na pinagpulbusan!",
  "label": "0"
}
```
### Data Fields
[More Information Needed]
### Data Splits
The dataset contains 600,000 premise-hypothesis pairs in a 70-15-15 split for training, validation, and testing.
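A minimal loading sketch with the `datasets` library, confirming the split sizes:
```
from datasets import load_dataset

newsph_nli = load_dataset("newsph_nli")
# Expected sizes: 420,000 train / 90,000 validation / 9,000 test
print({name: split.num_rows for name, split in newsph_nli.items()})
```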
## Dataset Creation
### Curation Rationale
We propose the use of news articles for automatically creating benchmark datasets for NLI because of two reasons. First, news articles commonly use single-sentence paragraphing, meaning every paragraph in a news article is limited to a single sentence. Second, straight news articles follow the “inverted pyramid” structure, where every succeeding paragraph builds upon the premise of those that came before it, with the most important information on top and the least important towards the end.
### Source Data
#### Initial Data Collection and Normalization
To create the dataset, we scrape news articles from all major Philippine news sites online. We collect a total of 229,571 straight news articles, which we then lightly preprocess to remove extraneous unicode characters and correct minimal misspellings. No further preprocessing is done to preserve information in the data.
#### Who are the source language producers?
The dataset was created by Jan Christian Blaise Cruz, Jose Kristian Resabal, James Lin, Dan John Velasco, and Charibeth Cheng from De La Salle University and the University of the Philippines.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
Jan Christian Blaise Cruz, Jose Kristian Resabal, James Lin, Dan John Velasco and Charibeth Cheng
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Jan Christian Blaise Cruz](mailto:jan_christian_cruz@dlsu.edu.ph)
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{cruz2020investigating,
  title={Investigating the True Performance of Transformers in Low-Resource Languages: A Case Study in Automatic Corpus Creation},
  author={Jan Christian Blaise Cruz and Jose Kristian Resabal and James Lin and Dan John Velasco and Charibeth Cheng},
  journal={arXiv preprint arXiv:2010.11574},
  year={2020}
}
```
### Contributions
Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset. |
newspop | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
paperswithcode_id: null
pretty_name: News Popularity in Multiple Social Media Platforms
tags:
- social-media-shares-prediction
dataset_info:
features:
- name: id
dtype: int32
- name: title
dtype: string
- name: headline
dtype: string
- name: source
dtype: string
- name: topic
dtype: string
- name: publish_date
dtype: string
- name: facebook
dtype: int32
- name: google_plus
dtype: int32
- name: linked_in
dtype: int32
splits:
- name: train
num_bytes: 27927641
num_examples: 93239
download_size: 30338277
dataset_size: 27927641
---
# Dataset Card for News Popularity in Multiple Social Media Platforms
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [UCI](https://archive.ics.uci.edu/ml/datasets/News+Popularity+in+Multiple+Social+Media+Platforms)
- **Repository:**
- **Paper:** [Arxiv](https://arxiv.org/abs/1801.07055)
- **Leaderboard:** [Kaggle](https://www.kaggle.com/nikhiljohnk/news-popularity-in-multiple-social-media-platforms/code)
- **Point of Contact:**
### Dataset Summary
Social sharing data across Facebook, Google+ and LinkedIn for about 100k news items on four topics: economy, microsoft, obama and palestine.
### Supported Tasks and Leaderboards
Popularity prediction/shares prediction
### Languages
English
## Dataset Structure
### Data Instances
```
{ "id": 35873,
"title": "Microsoft's 'teen girl' AI turns into a Hitler-loving sex robot within 24 ...",
"headline": "Developers at Microsoft created 'Tay', an AI modelled to speak 'like a teen girl', in order to improve the customer service on their voice",
"source": "Telegraph.co.uk",
"topic": "microsoft",
"publish_date": "2016-03-24 09:53:54",
"facebook": 22346,
"google_plus": 973,
"linked_in": 1009
}
```
### Data Fields
- id: the sentence id in the source dataset
- title: the title of the link as shared on social media
- headline: the headline, or sometimes the lede of the story
- source: the source news site
- topic: the topic: one of "economy", "microsoft", "obama" and "palestine"
- publish_date: the date the original article was published
- facebook: the number of Facebook shares, or -1 if this data wasn't collected
- google_plus: the number of Google+ likes, or -1 if this data wasn't collected
- linked_in: the number of LinkedIn shares, or -1 if this data wasn't collected
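Since -1 marks share counts that were not collected, analyses should filter it out. A minimal sketch with the `datasets` library:
```
from datasets import load_dataset

newspop = load_dataset("newspop", split="train")
# Drop rows where the Facebook share count was not collected (-1 sentinel)
with_facebook = newspop.filter(lambda row: row["facebook"] >= 0)
print(with_facebook.num_rows, "of", newspop.num_rows, "rows have Facebook counts")
```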
### Data Splits
There are no predefined splits; the dataset ships as a single `train` split.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
The source headlines were written by journalists, while the titles were written by the people sharing the articles on social media.
### Annotations
#### Annotation process
The 'annotations' are simply the numbers of shares, or likes in the case of Google+, as collected from various API endpoints.
#### Who are the annotators?
Social media users.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
### Citation Information
```
@article{Moniz2018MultiSourceSF,
title={Multi-Source Social Feedback of Online News Feeds},
author={N. Moniz and L. Torgo},
journal={ArXiv},
year={2018},
volume={abs/1801.07055}
}
```
### Contributions
Thanks to [@frankier](https://github.com/frankier) for adding this dataset. |
newsqa | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: newsqa
pretty_name: NewsQA
configs:
- combined-csv
- combined-json
- split
dataset_info:
- config_name: combined-csv
features:
- name: story_id
dtype: string
- name: story_text
dtype: string
- name: question
dtype: string
- name: answer_char_ranges
dtype: string
splits:
- name: train
num_bytes: 465942194
num_examples: 119633
download_size: 0
dataset_size: 465942194
- config_name: combined-json
features:
- name: storyId
dtype: string
- name: text
dtype: string
- name: type
dtype: string
- name: questions
sequence:
- name: q
dtype: string
- name: isAnswerAbsent
dtype: int32
- name: isQuestionBad
dtype: int32
- name: consensus
struct:
- name: s
dtype: int32
- name: e
dtype: int32
- name: badQuestion
dtype: bool
- name: noAnswer
dtype: bool
- name: answers
sequence:
- name: sourcerAnswers
sequence:
- name: s
dtype: int32
- name: e
dtype: int32
- name: badQuestion
dtype: bool
- name: noAnswer
dtype: bool
- name: validated_answers
sequence:
- name: s
dtype: int32
- name: e
dtype: int32
- name: badQuestion
dtype: bool
- name: noAnswer
dtype: bool
- name: count
dtype: int32
splits:
- name: train
num_bytes: 68667276
num_examples: 12744
download_size: 0
dataset_size: 68667276
- config_name: split
features:
- name: story_id
dtype: string
- name: story_text
dtype: string
- name: question
dtype: string
- name: answer_token_ranges
dtype: string
splits:
- name: train
num_bytes: 362031288
num_examples: 92549
- name: test
num_bytes: 19763673
num_examples: 5126
- name: validation
num_bytes: 19862778
num_examples: 5166
download_size: 0
dataset_size: 401657739
---
# Dataset Card for NewsQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.microsoft.com/en-us/research/project/newsqa-dataset/
- **Repository:** https://github.com/Maluuba/newsqa
- **Paper:** https://www.aclweb.org/anthology/W17-2623/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs.
Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
```
{'storyId': './cnn/stories/42d01e187213e86f5fe617fe32e716ff7fa3afc4.story',
'text': 'NEW DELHI, India (CNN) -- A high court in northern India on Friday acquitted a wealthy businessman facing the death sentence for the killing of a teen in a case dubbed "the house of horrors."\n\n\n\nMoninder Singh Pandher was sentenced to death by a lower court in February.\n\n\n\nThe teen was one of 19 victims -- children and young women -- in one of the most gruesome serial killings in India in recent years.\n\n\n\nThe Allahabad high court has acquitted Moninder Singh Pandher, his lawyer Sikandar B. Kochar told CNN.\n\n\n\nPandher and his domestic employee Surinder Koli were sentenced to death in February by a lower court for the rape and murder of the 14-year-old.\n\n\n\nThe high court upheld Koli\'s death sentence, Kochar said.\n\n\n\nThe two were arrested two years ago after body parts packed in plastic bags were found near their home in Noida, a New Delhi suburb. Their home was later dubbed a "house of horrors" by the Indian media.\n\n\n\nPandher was not named a main suspect by investigators initially, but was summoned as co-accused during the trial, Kochar said.\n\n\n\nKochar said his client was in Australia when the teen was raped and killed.\n\n\n\nPandher faces trial in the remaining 18 killings and could remain in custody, the attorney said.',
'type': 'train',
'questions': {'q': ['What was the amount of children murdered?',
'When was Pandher sentenced to death?',
'The court aquitted Moninder Singh Pandher of what crime?',
'who was acquitted',
'who was sentenced',
'What was Moninder Singh Pandher acquitted for?',
'Who was sentenced to death in February?',
'how many people died',
'How many children and young women were murdered?'],
'isAnswerAbsent': [0, 0, 0, 0, 0, 0, 0, 0, 0],
'isQuestionBad': [0, 0, 0, 0, 0, 0, 0, 0, 0],
'consensus': [{'s': 294, 'e': 297, 'badQuestion': False, 'noAnswer': False},
{'s': 261, 'e': 271, 'badQuestion': False, 'noAnswer': False},
{'s': 624, 'e': 640, 'badQuestion': False, 'noAnswer': False},
{'s': 195, 'e': 218, 'badQuestion': False, 'noAnswer': False},
{'s': 195, 'e': 218, 'badQuestion': False, 'noAnswer': False},
{'s': 129, 'e': 151, 'badQuestion': False, 'noAnswer': False},
{'s': 195, 'e': 218, 'badQuestion': False, 'noAnswer': False},
{'s': 294, 'e': 297, 'badQuestion': False, 'noAnswer': False},
{'s': 294, 'e': 297, 'badQuestion': False, 'noAnswer': False}],
'answers': [{'sourcerAnswers': [{'s': [294],
'e': [297],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]},
{'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]}]},
{'sourcerAnswers': [{'s': [261],
'e': [271],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [258], 'e': [271], 'badQuestion': [False], 'noAnswer': [False]},
{'s': [261], 'e': [271], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [26],
'e': [33],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]},
{'s': [624], 'e': [640], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [195],
'e': [218],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [195], 'e': [218], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [0],
'e': [0],
'badQuestion': [False],
'noAnswer': [True]},
{'s': [195, 232],
'e': [218, 271],
'badQuestion': [False, False],
'noAnswer': [False, False]},
{'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]}]},
{'sourcerAnswers': [{'s': [129],
'e': [192],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [129], 'e': [151], 'badQuestion': [False], 'noAnswer': [False]},
{'s': [133], 'e': [151], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [195],
'e': [218],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [195], 'e': [218], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [294],
'e': [297],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [294], 'e': [297], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [294],
'e': [297],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [294], 'e': [297], 'badQuestion': [False], 'noAnswer': [False]}]}],
'validated_answers': [{'s': [0, 294],
'e': [0, 297],
'badQuestion': [False, False],
'noAnswer': [True, False],
'count': [1, 2]},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
{'s': [624],
'e': [640],
'badQuestion': [False],
'noAnswer': [False],
'count': [2]},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
{'s': [195],
'e': [218],
'badQuestion': [False],
'noAnswer': [False],
'count': [2]},
{'s': [129],
'e': [151],
'badQuestion': [False],
'noAnswer': [False],
'count': [2]},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []}]}}
```
### Data Fields
Configuration: combined-csv
- 'story_id': An identifier of the story.
- 'story_text': Text of the story.
- 'question': A question about the story.
- 'answer_char_ranges': The raw data collected for character based indices to answers in story_text. E.g. 196:228|196:202,217:228|None. Answers from different crowdsourcers are separated by `|`; within those, multiple selections from the same crowdsourcer are separated by `,`. `None` means the crowdsourcer thought there was no answer to the question in the story. The start is inclusive and the end is exclusive. The end may point to whitespace after a token.
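A minimal sketch of how one might parse `answer_char_ranges` into per-crowdsourcer span lists (the helper name is illustrative):
```
def parse_answer_char_ranges(raw):
    """Parse e.g. '196:228|196:202,217:228|None' into one entry per crowdsourcer:
    a list of (start, end) character spans, or None if they saw no answer."""
    parsed = []
    for worker in raw.split("|"):
        if worker == "None":
            parsed.append(None)
        else:
            parsed.append([tuple(map(int, span.split(":"))) for span in worker.split(",")])
    return parsed
```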
Configuration: combined-json
- 'storyId': An identifier of the story.
- 'text': Text of the story.
- 'type': Split type. Will be "train", "validation" or "test".
- 'questions': A list containing the following:
- 'q': A question about the story.
- 'isAnswerAbsent': Proportion of crowdsourcers that said there was no answer to the question in the story.
- 'isQuestionBad': Proportion of crowdsourcers that said the question does not make sense.
- 'consensus': The consensus answer. Use this field to pick the best continuous answer span from the text. If you want to know about a question having multiple answers in the text then you can use the more detailed "answers" and "validated_answers". The object can have start and end positions like in the example above or can be {"badQuestion": true} or {"noAnswer": true}. Note that there is only one consensus answer since it's based on the majority agreement of the crowdsourcers.
- 's': Start of the answer. The first character of the answer in "text" (inclusive).
- 'e': End of the answer. The last character of the answer in "text" (exclusive).
- 'badQuestion': The validator said that the question did not make sense.
- 'noAnswer': The crowdsourcer said that there was no answer to the question in the text.
- 'answers': The answers from various crowdsourcers.
- 'sourcerAnswers': The answer provided from one crowdsourcer.
- 's': Start of the answer. The first character of the answer in "text" (inclusive).
- 'e': End of the answer. The last character of the answer in "text" (exclusive).
- 'badQuestion': The crowdsourcer said that the question did not make sense.
- 'noAnswer': The crowdsourcer said that there was no answer to the question in the text.
- 'validated_answers': The answers from the validators.
- 's': Start of the answer. The first character of the answer in "text" (inclusive).
- 'e': End of the answer. The last character of the answer in "text" (exclusive).
- 'badQuestion': The validator said that the question did not make sense.
- 'noAnswer': The validator said that there was no answer to the question in the text.
- 'count': The number of validators that agreed with this answer.
Configuration: split
- 'story_id': An identifier of the story
- 'story_text': text of the story
- 'question': A question about the story.
- 'answer_token_ranges': Word based indices to answers in story_text. E.g. 196:202,217:228. Multiple selections from the same answer are separated by `,`. The start is inclusive and the end is exclusive. The end may point to whitespace after a token.
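A minimal sketch of recovering answer strings from `answer_token_ranges` (whitespace tokenization is an assumption; the original tokenizer may differ):
```
def answers_from_token_ranges(story_text, answer_token_ranges):
    """Turn e.g. '196:202,217:228' into the answer strings they index."""
    tokens = story_text.split()  # assumes simple whitespace tokenization
    spans = [tuple(map(int, r.split(":"))) for r in answer_token_ranges.split(",")]
    return [" ".join(tokens[s:e]) for s, e in spans]
```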
### Data Splits
| name | train | validation | test |
|---------------|-----------:|-----------:|--------:|
| combined-csv | 119633 | | |
| combined-json | 12744 | | |
| split | 92549 | 5166 | 5126 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
NewsQA Code
Copyright (c) Microsoft Corporation
All rights reserved.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
### Citation Information
```
@inproceedings{trischler2017newsqa,
  title={NewsQA: A Machine Comprehension Dataset},
  author={Trischler, Adam and Wang, Tong and Yuan, Xingdi and Harris, Justin and Sordoni, Alessandro and Bachman, Philip and Suleman, Kaheer},
  booktitle={Proceedings of the 2nd Workshop on Representation Learning for NLP},
  pages={191--200},
  year={2017}
}
```
### Contributions
Thanks to [@rsanjaykamath](https://github.com/rsanjaykamath) for adding this dataset. |
newsroom | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: CORNELL NEWSROOM
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: newsroom
dataset_info:
features:
- name: text
dtype: string
- name: summary
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: date
dtype: string
- name: density_bin
dtype: string
- name: coverage_bin
dtype: string
- name: compression_bin
dtype: string
- name: density
dtype: float32
- name: coverage
dtype: float32
- name: compression
dtype: float32
splits:
- name: test
num_bytes: 472446866
num_examples: 108862
- name: train
num_bytes: 4357506078
num_examples: 995041
- name: validation
num_bytes: 473206951
num_examples: 108837
download_size: 0
dataset_size: 5303159895
---
# Dataset Card for "newsroom"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://lil.nlp.cornell.edu/newsroom/index.html](https://lil.nlp.cornell.edu/newsroom/index.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 5.30 GB
- **Total amount of disk used:** 5.30 GB
### Dataset Summary
NEWSROOM is a large dataset for training and evaluating summarization systems.
It contains 1.3 million articles and summaries written by authors and
editors in the newsrooms of 38 major publications.
Dataset features include:
- text: Input news text.
- summary: Summary for the news.
And additional features:
- title: news title.
- url: url of the news.
- date: date of the article.
- density: extractive density.
- coverage: extractive coverage.
- compression: compression ratio.
- density_bin: low, medium, high.
- coverage_bin: extractive, abstractive.
- compression_bin: low, medium, high.
This dataset can be downloaded upon request. Unzip all the contents
(train.jsonl, dev.jsonl, test.jsonl) into the `tfds` folder.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English (`en`).
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 5.30 GB
- **Total amount of disk used:** 5.30 GB
An example of 'train' looks as follows.
```
{
"compression": 33.880001068115234,
"compression_bin": "medium",
"coverage": 1.0,
"coverage_bin": "high",
"date": "200600000",
"density": 11.720000267028809,
"density_bin": "extractive",
"summary": "some summary 1",
"text": "some text 1",
"title": "news title 1",
"url": "url.html"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `text`: a `string` feature.
- `summary`: a `string` feature.
- `title`: a `string` feature.
- `url`: a `string` feature.
- `date`: a `string` feature.
- `density_bin`: a `string` feature.
- `coverage_bin`: a `string` feature.
- `compression_bin`: a `string` feature.
- `density`: a `float32` feature.
- `coverage`: a `float32` feature.
- `compression`: a `float32` feature.
### Data Splits
| name |train |validation| test |
|-------|-----:|---------:|-----:|
|default|995041| 108837|108862|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
https://cornell.qualtrics.com/jfe/form/SV_6YA3HQ2p75XH4IR
This Dataset Usage Agreement ("Agreement") is a legal agreement with the Cornell Newsroom Summaries Team ("Newsroom") for the Dataset made available to the individual or entity ("Researcher") exercising rights under this Agreement. "Dataset" includes all text, data, information, source code, and any related materials, documentation, files, media, updates or revisions.
The Dataset is intended for non-commercial research and educational purposes only, and is made available free of charge without extending any license or other intellectual property rights. By downloading or using the Dataset, the Researcher acknowledges that they agree to the terms in this Agreement, and represent and warrant that they have authority to do so on behalf of any entity exercising rights under this Agreement. The Researcher accepts and agrees to be bound by the terms and conditions of this Agreement. If the Researcher does not agree to this Agreement, they may not download or use the Dataset.
By sharing content with Newsroom, such as by submitting content to this site or by corresponding with Newsroom contributors, the Researcher grants Newsroom the right to use, reproduce, display, perform, adapt, modify, distribute, have distributed, and promote the content in any form, anywhere and for any purpose, such as for evaluating and comparing summarization systems. Nothing in this Agreement shall obligate Newsroom to provide any support for the Dataset. Any feedback, suggestions, ideas, comments, improvements given by the Researcher related to the Dataset is voluntarily given, and may be used by Newsroom without obligation or restriction of any kind.
The Researcher accepts full responsibility for their use of the Dataset and shall defend indemnify, and hold harmless Newsroom, including their employees, trustees, officers, and agents, against any and all claims arising from the Researcher's use of the Dataset. The Researcher agrees to comply with all laws and regulations as they relate to access to and use of the Dataset and Service including U.S. export jurisdiction and other U.S. and international regulations.
THE DATASET IS PROVIDED "AS IS." NEWSROOM DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. WITHOUT LIMITATION OF THE ABOVE, NEWSROOM DISCLAIMS ANY WARRANTY THAT DATASET IS BUG OR ERROR-FREE, AND GRANTS NO WARRANTY REGARDING ITS USE OR THE RESULTS THEREFROM INCLUDING, WITHOUT LIMITATION, ITS CORRECTNESS, ACCURACY, OR RELIABILITY. THE DATASET IS NOT WARRANTIED TO FULFILL ANY PARTICULAR PURPOSES OR NEEDS.
TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT SHALL NEWSROOM BE LIABLE FOR ANY LOSS, DAMAGE OR INJURY, DIRECT AND INDIRECT, INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES, HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER FOR BREACH OF CONTRACT, TORT (INCLUDING NEGLIGENCE) OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, INCLUDING BUT NOT LIMITED TO LOSS OF PROFITS, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THESE LIMITATIONS SHALL APPLY NOTWITHSTANDING ANY FAILURE OF ESSENTIAL PURPOSE OF ANY LIMITED REMEDY.
This Agreement is effective until terminated. Newsroom reserves the right to terminate the Researcher's access to the Dataset at any time. If the Researcher breaches this Agreement, the Researcher's rights to use the Dataset shall terminate automatically. The Researcher will immediately cease all use and distribution of the Dataset and destroy any copies or portions of the Dataset in their possession.
This Agreement is governed by the laws of the State of New York, without regard to conflict of law principles. All terms and provisions of this Agreement shall, if possible, be construed in a manner which makes them valid, but in the event any term or provision of this Agreement is found by a court of competent jurisdiction to be illegal or unenforceable, the validity or enforceability of the remainder of this Agreement shall not be affected.
This Agreement is the complete and exclusive agreement between the parties with respect to its subject matter and supersedes all prior or contemporaneous oral or written agreements or understandings relating to the subject matter.
### Citation Information
```
@inproceedings{N18-1065,
author = {Grusky, Max and Naaman, Mor and Artzi, Yoav},
title = {NEWSROOM: A Dataset of 1.3 Million Summaries
with Diverse Extractive Strategies},
booktitle = {Proceedings of the 2018 Conference of the
North American Chapter of the Association for
Computational Linguistics: Human Language Technologies},
year = {2018},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@yoavartzi](https://github.com/yoavartzi), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
nkjp-ner | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: NKJP NER
dataset_info:
features:
- name: sentence
dtype: string
- name: target
dtype:
class_label:
names:
'0': geogName
'1': noEntity
'2': orgName
'3': persName
'4': placeName
'5': time
splits:
- name: train
num_bytes: 1612125
num_examples: 15794
- name: test
num_bytes: 221092
num_examples: 2058
- name: validation
num_bytes: 196652
num_examples: 1941
download_size: 821629
dataset_size: 2029869
---
# Dataset Card for NKJP NER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
http://nkjp.pl/index.php?page=0&lang=1
- **Repository:**
- **Paper:**
@book{przepiorkowski2012narodowy,
title={Narodowy korpus j{\k{e}}zyka polskiego},
author={Przepi{\'o}rkowski, Adam},
year={2012},
publisher={Naukowe PWN}
}
- **Leaderboard:**
- **Point of Contact:**
adamp@ipipan.waw.pl
### Dataset Summary
A linguistic corpus is a collection of texts where one can find the typical use of a single word or a phrase, as well as their meaning and grammatical function. Nowadays, without access to a language corpus, it has become impossible to do linguistic research, to write dictionaries, grammars and language teaching books, to create search engines sensitive to Polish inflection, machine translation engines and software of advanced language technology. Language corpora have become an essential tool for linguists, but they are also helpful for software engineers, scholars of literature and culture, historians, librarians and other specialists of art and computer sciences.
The manually annotated 1-million-word subcorpus of the NKJP, available under GNU GPL v.3.
### Supported Tasks and Leaderboards
Named entity recognition
[More Information Needed]
### Languages
Polish
## Dataset Structure
### Data Instances
Two TSV files (train, dev) with two columns (sentence, target) and one (test) with just one column (sentence).
### Data Fields
- sentence
- target
### Data Splits
The data is split into train/dev/test sets.
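A minimal loading sketch with the `datasets` library, decoding the integer target back to its class name:
```
from datasets import load_dataset

nkjp = load_dataset("nkjp-ner")
example = nkjp["train"][0]
# target is a ClassLabel over geogName/noEntity/orgName/persName/placeName/time
label = nkjp["train"].features["target"].int2str(example["target"])
print(example["sentence"], "->", label)
```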
## Dataset Creation
### Curation Rationale
This dataset is one of nine evaluation tasks aimed at improving Polish language processing.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
GNU GPL v.3
### Citation Information
```
@book{przepiorkowski2012narodowy,
  title={Narodowy korpus j{\k{e}}zyka polskiego},
  author={Przepi{\'o}rkowski, Adam},
  year={2012},
  publisher={Naukowe PWN}
}
```
### Contributions
Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset. |
nli_tr | ---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
language:
- tr
license:
- cc-by-3.0
- cc-by-4.0
- cc-by-sa-3.0
- mit
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|snli
- extended|multi_nli
task_categories:
- text-classification
task_ids:
- natural-language-inference
- semantic-similarity-scoring
- text-scoring
paperswithcode_id: nli-tr
pretty_name: Natural Language Inference in Turkish
configs:
- multinli_tr
- snli_tr
license_details: Open Portion of the American National Corpus
dataset_info:
- config_name: snli_tr
features:
- name: idx
dtype: int32
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 71175743
num_examples: 550152
- name: validation
num_bytes: 1359639
num_examples: 10000
- name: test
num_bytes: 1355409
num_examples: 10000
download_size: 40328942
dataset_size: 73890791
- config_name: multinli_tr
features:
- name: idx
dtype: int32
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: train
num_bytes: 75524150
num_examples: 392702
- name: validation_matched
num_bytes: 1908283
num_examples: 10000
- name: validation_mismatched
num_bytes: 2039392
num_examples: 10000
download_size: 75518512
dataset_size: 79471825
---
# Dataset Card for "nli_tr"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/boun-tabi/NLI-TR](https://github.com/boun-tabi/NLI-TR)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 115.85 MB
- **Size of the generated dataset:** 153.36 MB
- **Total amount of disk used:** 269.21 MB
### Dataset Summary
The Natural Language Inference in Turkish (NLI-TR) is a set of two large-scale datasets obtained by translating the foundational NLI corpora (SNLI and MNLI) into Turkish using Amazon Translate.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### multinli_tr
- **Size of downloaded dataset files:** 75.52 MB
- **Size of the generated dataset:** 79.47 MB
- **Total amount of disk used:** 154.99 MB
An example of 'validation_matched' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "Mrinal Sen'in çalışmalarının çoğu Avrupa koleksiyonlarında bulunabilir.",
"idx": 7,
"label": 1,
"premise": "\"Kalküta, sanatsal yaratıcılığa dair herhangi bir iddiaya sahip olan tek diğer üretim merkezi gibi görünüyor, ama ironik bir şek..."
}
```
#### snli_tr
- **Size of downloaded dataset files:** 40.33 MB
- **Size of the generated dataset:** 73.89 MB
- **Total amount of disk used:** 114.22 MB
An example of 'train' looks as follows.
```
{
"hypothesis": "Yaşlı bir adam, kızının işten çıkmasını bekçiyken suyunu içer.",
"idx": 9,
"label": 1,
"premise": "Parlak renkli gömlek çalışanları arka planda gülümseme iken yaşlı bir adam bir kahve dükkanında küçük bir masada onun portakal suyu ile oturur."
}
```
### Data Fields
The data fields are the same among all splits.
#### multinli_tr
- `idx`: a `int32` feature.
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### snli_tr
- `idx`: a `int32` feature.
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
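A minimal sketch of loading either configuration with the `datasets` library:
```
from datasets import load_dataset

snli_tr = load_dataset("nli_tr", "snli_tr", split="train")
multinli_tr = load_dataset("nli_tr", "multinli_tr", split="validation_matched")
# label is a ClassLabel; int2str maps 0/1/2 to entailment/neutral/contradiction
print(snli_tr.features["label"].int2str(snli_tr[0]["label"]))
```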
### Data Splits
#### multinli_tr
| |train |validation_matched|validation_mismatched|
|-----------|-----:|-----------------:|--------------------:|
|multinli_tr|392702| 10000| 10000|
#### snli_tr
| |train |validation|test |
|-------|-----:|---------:|----:|
|snli_tr|550152| 10000|10000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{budur-etal-2020-data,
title = "Data and Representation for Turkish Natural Language Inference",
author = "Budur, Emrah and
"{O}zçelik, Rıza and
G"{u}ng"{o}r, Tunga",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
abstract = "Large annotated datasets in NLP are overwhelmingly in English. This is an obstacle to progress in other languages. Unfortunately, obtaining new annotated resources for each task in each language would be prohibitively expensive. At the same time, commercial machine translation systems are now robust. Can we leverage these systems to translate English-language datasets automatically? In this paper, we offer a positive response for natural language inference (NLI) in Turkish. We translated two large English NLI datasets into Turkish and had a team of experts validate their translation quality and fidelity to the original labels. Using these datasets, we address core issues of representation for Turkish NLI. We find that in-language embeddings are essential and that morphological parsing can be avoided where the training set is large. Finally, we show that models trained on our machine-translated datasets are successful on human-translated evaluation sets. We share all code, models, and data publicly.",
}
```
### Contributions
Thanks to [@e-budur](https://github.com/e-budur) for adding this dataset. |
nlu_evaluation_data | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
pretty_name: NLU Evaluation Data
dataset_info:
features:
- name: text
dtype: string
- name: scenario
dtype: string
- name: label
dtype:
class_label:
names:
'0': alarm_query
'1': alarm_remove
'2': alarm_set
'3': audio_volume_down
'4': audio_volume_mute
'5': audio_volume_other
'6': audio_volume_up
'7': calendar_query
'8': calendar_remove
'9': calendar_set
'10': cooking_query
'11': cooking_recipe
'12': datetime_convert
'13': datetime_query
'14': email_addcontact
'15': email_query
'16': email_querycontact
'17': email_sendemail
'18': general_affirm
'19': general_commandstop
'20': general_confirm
'21': general_dontcare
'22': general_explain
'23': general_greet
'24': general_joke
'25': general_negate
'26': general_praise
'27': general_quirky
'28': general_repeat
'29': iot_cleaning
'30': iot_coffee
'31': iot_hue_lightchange
'32': iot_hue_lightdim
'33': iot_hue_lightoff
'34': iot_hue_lighton
'35': iot_hue_lightup
'36': iot_wemo_off
'37': iot_wemo_on
'38': lists_createoradd
'39': lists_query
'40': lists_remove
'41': music_dislikeness
'42': music_likeness
'43': music_query
'44': music_settings
'45': news_query
'46': play_audiobook
'47': play_game
'48': play_music
'49': play_podcasts
'50': play_radio
'51': qa_currency
'52': qa_definition
'53': qa_factoid
'54': qa_maths
'55': qa_stock
'56': recommendation_events
'57': recommendation_locations
'58': recommendation_movies
'59': social_post
'60': social_query
'61': takeaway_order
'62': takeaway_query
'63': transport_query
'64': transport_taxi
'65': transport_ticket
'66': transport_traffic
'67': weather_query
splits:
- name: train
num_bytes: 1447941
num_examples: 25715
download_size: 5867439
dataset_size: 1447941
---
# Dataset Card for NLU Evaluation Data
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/xliuhw/NLU-Evaluation-Data)
- **Repository:** [Github](https://github.com/xliuhw/NLU-Evaluation-Data)
- **Paper:** [ArXiv](https://arxiv.org/abs/1903.05566)
- **Leaderboard:**
- **Point of Contact:** [x.liu@hw.ac.uk](mailto:x.liu@hw.ac.uk)
### Dataset Summary
A dataset of short utterances from the conversational domain, annotated with their corresponding intents and scenarios.
It has 25 715 non-empty examples (the original dataset has 25 716) belonging to 18 scenarios and 68 intents.
Originally, the dataset was crowd-sourced and annotated with both intents and named entities
in order to evaluate commercial NLU systems such as RASA, IBM's Watson, Microsoft's LUIS and Google's Dialogflow.
**This version of the dataset only includes intent annotations!**
In contrast to the paper's claims, the released data contains 68 unique intents. This is because the NLU systems were
evaluated on a more curated part of this dataset, which only included the 64 most important intents. Read more in this [github issue](https://github.com/xliuhw/NLU-Evaluation-Data/issues/5).
### Supported Tasks and Leaderboards
Intent classification, intent detection
### Languages
English
## Dataset Structure
### Data Instances
An example of 'train' looks as follows:
```
{
'label': 2, # integer label corresponding to "alarm_set" intent
'scenario': 'alarm',
'text': 'wake me up at five am this week'
}
```
### Data Fields
- `text`: a string feature.
- `label`: one of the classification labels (0-67), each corresponding to a unique intent.
- `scenario`: a string naming one of the 18 unique scenarios.
Intent names are mapped to `label` in the following way:
| label | intent |
|--------:|:-------------------------|
| 0 | alarm_query |
| 1 | alarm_remove |
| 2 | alarm_set |
| 3 | audio_volume_down |
| 4 | audio_volume_mute |
| 5 | audio_volume_other |
| 6 | audio_volume_up |
| 7 | calendar_query |
| 8 | calendar_remove |
| 9 | calendar_set |
| 10 | cooking_query |
| 11 | cooking_recipe |
| 12 | datetime_convert |
| 13 | datetime_query |
| 14 | email_addcontact |
| 15 | email_query |
| 16 | email_querycontact |
| 17 | email_sendemail |
| 18 | general_affirm |
| 19 | general_commandstop |
| 20 | general_confirm |
| 21 | general_dontcare |
| 22 | general_explain |
| 23 | general_greet |
| 24 | general_joke |
| 25 | general_negate |
| 26 | general_praise |
| 27 | general_quirky |
| 28 | general_repeat |
| 29 | iot_cleaning |
| 30 | iot_coffee |
| 31 | iot_hue_lightchange |
| 32 | iot_hue_lightdim |
| 33 | iot_hue_lightoff |
| 34 | iot_hue_lighton |
| 35 | iot_hue_lightup |
| 36 | iot_wemo_off |
| 37 | iot_wemo_on |
| 38 | lists_createoradd |
| 39 | lists_query |
| 40 | lists_remove |
| 41 | music_dislikeness |
| 42 | music_likeness |
| 43 | music_query |
| 44 | music_settings |
| 45 | news_query |
| 46 | play_audiobook |
| 47 | play_game |
| 48 | play_music |
| 49 | play_podcasts |
| 50 | play_radio |
| 51 | qa_currency |
| 52 | qa_definition |
| 53 | qa_factoid |
| 54 | qa_maths |
| 55 | qa_stock |
| 56 | recommendation_events |
| 57 | recommendation_locations |
| 58 | recommendation_movies |
| 59 | social_post |
| 60 | social_query |
| 61 | takeaway_order |
| 62 | takeaway_query |
| 63 | transport_query |
| 64 | transport_taxi |
| 65 | transport_ticket |
| 66 | transport_traffic |
| 67 | weather_query |
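Since `label` is a `ClassLabel` feature, the mapping above can also be queried programmatically. A minimal sketch, assuming the dataset id `nlu_evaluation_data` on the Hugging Face Hub:
```python
from datasets import load_dataset

ds = load_dataset("nlu_evaluation_data", split="train")

# ClassLabel knows both directions of the mapping shown in the table.
label_feature = ds.features["label"]
print(label_feature.int2str(2))           # "alarm_set"
print(label_feature.str2int("qa_maths"))  # 54
```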
### Data Splits
| Dataset statistics | Train |
| --- | --- |
| Number of examples | 25,715 |
| Average character length | 34.32 |
| Number of intents | 68 |
| Number of scenarios | 18 |
## Dataset Creation
### Curation Rationale
The dataset was prepared for a wide-coverage evaluation and comparison of some of the most popular NLU services.
At that time, previous benchmarks had been done with few intents spanning a limited number of domains. This dataset
is much larger and contains 68 intents from 18 scenarios, which is broader than any previous evaluation. For more discussion see the paper.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
> To build the NLU component we collected real user data via Amazon Mechanical Turk (AMT). We designed tasks where the Turker’s goal was to answer questions about how people would interact with the home robot, in a wide range of scenarios designed in advance, namely: alarm, audio, audiobook, calendar, cooking, datetime, email, game, general, IoT, lists, music, news, podcasts, general Q&A, radio, recommendations, social, food takeaway, transport, and weather.
> The questions put to Turkers were designed to capture the different requests within each given scenario.
> In the ‘calendar’ scenario, for example, these pre-designed intents were included: ‘set event’, ‘delete event’ and ‘query event’.
> An example question for intent ‘set event’ is: “How would you ask your PDA to schedule a meeting with someone?” for which a user’s answer example was “Schedule a chat with Adam on Thursday afternoon”.
> The Turkers would then type in their answers to these questions and select possible entities from the pre-designed suggested entities list for each of their answers. The Turkers didn’t always follow the instructions fully, e.g. for the specified ‘delete event’ Intent, an answer was: “PDA what is my next event?”; which clearly belongs to ‘query event’ Intent.
> We have manually corrected all such errors either during post-processing or the subsequent annotations.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better intent detection systems.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons Attribution 4.0 International License (CC BY 4.0)
### Citation Information
```
@InProceedings{XLiu.etal:IWSDS2019,
author = {Xingkun Liu, Arash Eshghi, Pawel Swietojanski and Verena Rieser},
title = {Benchmarking Natural Language Understanding Services for building Conversational Agents},
booktitle = {Proceedings of the Tenth International Workshop on Spoken Dialogue Systems Technology (IWSDS)},
month = {April},
year = {2019},
address = {Ortigia, Siracusa (SR), Italy},
publisher = {Springer},
pages = {xxx--xxx},
url = {http://www.xx.xx/xx/}
}
```
### Contributions
Thanks to [@dkajtoch](https://github.com/dkajtoch) for adding this dataset. |
norec | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- nb
- nn
- 'no'
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: norec
pretty_name: NoReC
dataset_info:
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
- name: xpos_tags
sequence: string
- name: feats
sequence: string
- name: head
sequence: string
- name: deprel
sequence: string
- name: deps
sequence: string
- name: misc
sequence: string
splits:
- name: train
num_bytes: 1254757266
num_examples: 680792
- name: validation
num_bytes: 189534106
num_examples: 101106
- name: test
num_bytes: 193801708
num_examples: 101594
download_size: 212492611
dataset_size: 1638093080
---
# Dataset Card for NoReC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/ltgoslo/norec
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2018/pdf/851.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This dataset contains the Norwegian Review Corpus (NoReC), created for the purpose of training and evaluating models for document-level sentiment analysis. More than 43,000 full-text reviews have been collected from major Norwegian news sources and cover a range of different domains, including literature, movies, video games, restaurants, music and theater, in addition to product reviews across a range of categories. Each review is labeled with a manually assigned score of 1–6, as provided by the rating of the original author.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The sentences in the dataset are in Norwegian (nb, nn, no).
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{'deprel': ['det',
'amod',
'cc',
'conj',
'nsubj',
'case',
'nmod',
'cop',
'case',
'case',
'root',
'flat:name',
'flat:name',
'punct'],
'deps': ['None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None'],
'feats': ["{'Gender': 'Masc', 'Number': 'Sing', 'PronType': 'Dem'}",
"{'Definite': 'Def', 'Degree': 'Pos', 'Number': 'Sing'}",
'None',
"{'Definite': 'Def', 'Degree': 'Pos', 'Number': 'Sing'}",
"{'Definite': 'Def', 'Gender': 'Masc', 'Number': 'Sing'}",
'None',
'None',
"{'Mood': 'Ind', 'Tense': 'Pres', 'VerbForm': 'Fin'}",
'None',
'None',
'None',
'None',
'None',
'None'],
'head': ['5',
'5',
'4',
'2',
'11',
'7',
'5',
'11',
'11',
'11',
'0',
'11',
'11',
'11'],
'idx': '000000-02-01',
'lemmas': ['den',
'andre',
'og',
'sist',
'sesong',
'av',
'Rome',
'være',
'ute',
'på',
'DVD',
'i',
'Norge',
'$.'],
'misc': ['None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
"{'SpaceAfter': 'No'}",
'None'],
'pos_tags': [5, 0, 4, 0, 7, 1, 11, 3, 1, 1, 11, 1, 11, 12],
'text': 'Den andre og siste sesongen av Rome er ute på DVD i Norge.',
'tokens': ['Den',
'andre',
'og',
'siste',
'sesongen',
'av',
'Rome',
'er',
'ute',
'på',
'DVD',
'i',
'Norge',
'.'],
'xpos_tags': ['None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None',
'None']}
```
### Data Fields
The data instances have the following fields:
- deprel: [More Information Needed]
- deps: [More Information Needed]
- feats: [More Information Needed]
- head: [More Information Needed]
- idx: index
- lemmas: lemmas of all tokens
- misc: [More Information Needed]
- pos_tags: part of speech tags
- text: text string
- tokens: tokens
- xpos_tags: [More Information Needed]
The part-of-speech tags correspond to these labels: "ADJ" (0), "ADP" (1), "ADV" (2), "AUX" (3), "CCONJ" (4), "DET" (5), "INTJ" (6), "NOUN" (7), "NUM" (8), "PART" (9), "PRON" (10), "PROPN" (11), "PUNCT" (12), "SCONJ" (13), "SYM" (14), "VERB" (15), "X" (16).
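Since `pos_tags` is a sequence of `ClassLabel` values, the integer tags can be decoded back to these names. A minimal sketch, assuming the dataset id `norec` on the Hugging Face Hub:
```python
from datasets import load_dataset

ds = load_dataset("norec", split="train")

# The Sequence feature wraps a ClassLabel that holds the tag names.
pos_feature = ds.features["pos_tags"].feature
example = ds[0]
print([pos_feature.int2str(t) for t in example["pos_tags"]])
```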
### Data Splits
The training, validation, and test sets contain `680792`, `101106`, and `101594` sentences, respectively.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{VelOvrBer18,
author = {Erik Velldal and Lilja {\O}vrelid and
Eivind Alexander Bergem and Cathrine Stadsnes and
Samia Touileb and Fredrik J{\o}rgensen},
title = {{NoReC}: The {N}orwegian {R}eview {C}orpus},
booktitle = {Proceedings of the 11th edition of the
Language Resources and Evaluation Conference},
year = {2018},
address = {Miyazaki, Japan},
pages = {4186--4191}
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
norne | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- 'no'
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: 'NorNE: Norwegian Named Entities'
dataset_info:
- config_name: bokmaal
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-GPE_LOC
'6': I-GPE_LOC
'7': B-PROD
'8': I-PROD
'9': B-LOC
'10': I-LOC
'11': B-GPE_ORG
'12': I-GPE_ORG
'13': B-DRV
'14': I-DRV
'15': B-EVT
'16': I-EVT
'17': B-MISC
'18': I-MISC
splits:
- name: train
num_bytes: 10032169
num_examples: 15696
- name: validation
num_bytes: 1501730
num_examples: 2410
- name: test
num_bytes: 1234272
num_examples: 1939
download_size: 20909241
dataset_size: 12768171
- config_name: nynorsk
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-GPE_LOC
'6': I-GPE_LOC
'7': B-PROD
'8': I-PROD
'9': B-LOC
'10': I-LOC
'11': B-GPE_ORG
'12': I-GPE_ORG
'13': B-DRV
'14': I-DRV
'15': B-EVT
'16': I-EVT
'17': B-MISC
'18': I-MISC
splits:
- name: train
num_bytes: 10072260
num_examples: 14174
- name: validation
num_bytes: 1278029
num_examples: 1890
- name: test
num_bytes: 1023358
num_examples: 1511
download_size: 20209253
dataset_size: 12373647
- config_name: combined
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-GPE_LOC
'6': I-GPE_LOC
'7': B-PROD
'8': I-PROD
'9': B-LOC
'10': I-LOC
'11': B-GPE_ORG
'12': I-GPE_ORG
'13': B-DRV
'14': I-DRV
'15': B-EVT
'16': I-EVT
'17': B-MISC
'18': I-MISC
splits:
- name: train
num_bytes: 20104393
num_examples: 29870
- name: validation
num_bytes: 2779723
num_examples: 4300
- name: test
num_bytes: 2257594
num_examples: 3450
download_size: 41118494
dataset_size: 25141710
- config_name: bokmaal-7
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-PROD
'6': I-PROD
'7': B-LOC
'8': I-LOC
'9': B-DRV
'10': I-DRV
'11': B-EVT
'12': I-EVT
'13': B-MISC
'14': I-MISC
splits:
- name: train
num_bytes: 10032169
num_examples: 15696
- name: validation
num_bytes: 1501730
num_examples: 2410
- name: test
num_bytes: 1234272
num_examples: 1939
download_size: 20909241
dataset_size: 12768171
- config_name: nynorsk-7
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-PROD
'6': I-PROD
'7': B-LOC
'8': I-LOC
'9': B-DRV
'10': I-DRV
'11': B-EVT
'12': I-EVT
'13': B-MISC
'14': I-MISC
splits:
- name: train
num_bytes: 10072260
num_examples: 14174
- name: validation
num_bytes: 1278029
num_examples: 1890
- name: test
num_bytes: 1023358
num_examples: 1511
download_size: 20209253
dataset_size: 12373647
- config_name: combined-7
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-PROD
'6': I-PROD
'7': B-LOC
'8': I-LOC
'9': B-DRV
'10': I-DRV
'11': B-EVT
'12': I-EVT
'13': B-MISC
'14': I-MISC
splits:
- name: train
num_bytes: 20104393
num_examples: 29870
- name: validation
num_bytes: 2779723
num_examples: 4300
- name: test
num_bytes: 2257594
num_examples: 3450
download_size: 41118494
dataset_size: 25141710
- config_name: bokmaal-8
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-PROD
'6': I-PROD
'7': B-LOC
'8': I-LOC
'9': B-GPE
'10': I-GPE
'11': B-DRV
'12': I-DRV
'13': B-EVT
'14': I-EVT
'15': B-MISC
'16': I-MISC
splits:
- name: train
num_bytes: 10032169
num_examples: 15696
- name: validation
num_bytes: 1501730
num_examples: 2410
- name: test
num_bytes: 1234272
num_examples: 1939
download_size: 20909241
dataset_size: 12768171
- config_name: nynorsk-8
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-PROD
'6': I-PROD
'7': B-LOC
'8': I-LOC
'9': B-GPE
'10': I-GPE
'11': B-DRV
'12': I-DRV
'13': B-EVT
'14': I-EVT
'15': B-MISC
'16': I-MISC
splits:
- name: train
num_bytes: 10072260
num_examples: 14174
- name: validation
num_bytes: 1278029
num_examples: 1890
- name: test
num_bytes: 1023358
num_examples: 1511
download_size: 20209253
dataset_size: 12373647
- config_name: combined-8
features:
- name: idx
dtype: string
- name: lang
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-PROD
'6': I-PROD
'7': B-LOC
'8': I-LOC
'9': B-GPE
'10': I-GPE
'11': B-DRV
'12': I-DRV
'13': B-EVT
'14': I-EVT
'15': B-MISC
'16': I-MISC
splits:
- name: train
num_bytes: 20104393
num_examples: 29870
- name: validation
num_bytes: 2779723
num_examples: 4300
- name: test
num_bytes: 2257594
num_examples: 3450
download_size: 41118494
dataset_size: 25141710
---
# Dataset Card for NorNE: Norwegian Named Entities
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [NorNE](https://github.com/ltgoslo/norne/)
- **Repository:** [Github](https://github.com/ltgoslo/norne/)
- **Paper:** https://arxiv.org/abs/1911.12146
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
There are 3 main configs in this dataset each with 3 versions of the NER tag set. When accessing the `bokmaal`, `nynorsk`, or `combined` configs the NER tag set will be comprised of 9 tags: `GPE_ORG`, `GPE_LOC`, `ORG`, `LOC`, `PER`, `PROD`, `EVT`, `DRV`, and `MISC`. The two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`/`ORG` tags, conflating them with the other annotations of the same type. To access these reduced versions of the dataset, you can use the configs `bokmaal-7`, `nynorsk-7`, `combined-7` for the NER tag set with 7 tags ( **`ORG`**, **`LOC`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`), and `bokmaal-8`, `nynorsk-8`, `combined-8` for the NER tag set with 8 tags (`LOC_` and `ORG_`: **`ORG`**, **`LOC`**, **`GPE`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`). By default, the full set (9 tags) will be used. See Annotations for further details.
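A minimal loading sketch, assuming the dataset id `norne` on the Hugging Face Hub:
```python
from datasets import load_dataset

norne_9 = load_dataset("norne", "bokmaal")     # full 9-tag set (default)
norne_7 = load_dataset("norne", "bokmaal-7")   # GPE_LOC/GPE_ORG folded into LOC/ORG
norne_8 = load_dataset("norne", "combined-8")  # GPE_LOC/GPE_ORG folded into GPE
```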
### Supported Tasks and Leaderboards
NorNE adds named entity annotations on top of the Norwegian Dependency Treebank.
### Languages
Both Norwegian Bokmål (`bokmaal`) and Nynorsk (`nynorsk`) are supported as different configs in this dataset. An extra config for the combined languages is also included (`combined`). See the Annotation section for details on accessing reduced tag sets for the NER feature.
## Dataset Structure
Each entry contains text sentences, their language, identifiers, tokens, lemmas, and corresponding NER and POS tag lists.
### Data Instances
An example of the `train` split of the `bokmaal` config.
```python
{'idx': '000001',
'lang': 'bokmaal',
'lemmas': ['lam', 'og', 'piggvar', 'på', 'bryllupsmeny'],
'ner_tags': [0, 0, 0, 0, 0],
'pos_tags': [0, 9, 0, 5, 0],
'text': 'Lam og piggvar på bryllupsmenyen',
'tokens': ['Lam', 'og', 'piggvar', 'på', 'bryllupsmenyen']}
```
### Data Fields
Each entry is annotated with the following fields:
- `idx` (`str`), text (sentence) identifier from the NorNE dataset
- `lang` (`str`), language variety, either `bokmaal`, `nynorsk` or `combined`
- `text` (`str`), plain text
- `tokens` (`List[str]`), list of tokens extracted from `text`
- `lemmas` (`List[str]`), list of lemmas extracted from `tokens`
- `ner_tags` (`List[int]`), list of numeric NER tags for each token in `tokens`
- `pos_tags` (`List[int]`), list of numeric PoS tags for each token in `tokens`
An example DataFrame obtained from the dataset:
<table class="dataframe" border="1">
<thead>
<tr style="text-align: right;">
<th></th>
<th>idx</th>
<th>lang</th>
<th>text</th>
<th>tokens</th>
<th>lemmas</th>
<th>ner_tags</th>
<th>pos_tags</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>000001</td>
<td>bokmaal</td>
<td>Lam og piggvar på bryllupsmenyen</td>
<td>[Lam, og, piggvar, på, bryllupsmenyen]</td>
<td>[lam, og, piggvar, på, bryllupsmeny]</td>
<td>[0, 0, 0, 0, 0]</td>
<td>[0, 9, 0, 5, 0]</td>
</tr>
<tr>
<th>1</th>
<td>000002</td>
<td>bokmaal</td>
<td>Kamskjell, piggvar og lammefilet sto på menyen...</td>
<td>[Kamskjell, ,, piggvar, og, lammefilet, sto, p...</td>
<td>[kamskjell, $,, piggvar, og, lammefilet, stå, ...</td>
<td>[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]</td>
<td>[0, 1, 0, 9, 0, 15, 2, 0, 2, 8, 6, 0, 1]</td>
</tr>
<tr>
<th>2</th>
<td>000003</td>
<td>bokmaal</td>
<td>Og til dessert: Parfait à la Mette-Marit.</td>
<td>[Og, til, dessert, :, Parfait, à, la, Mette-Ma...</td>
<td>[og, til, dessert, $:, Parfait, à, la, Mette-M...</td>
<td>[0, 0, 0, 0, 7, 8, 8, 8, 0]</td>
<td>[9, 2, 0, 1, 10, 12, 12, 10, 1]</td>
</tr>
</tbody>
</table>
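A DataFrame like the one above can be produced directly from the dataset. A minimal sketch, again assuming the dataset id `norne`:
```python
from datasets import load_dataset

train = load_dataset("norne", "bokmaal", split="train")
df = train.select(range(3)).to_pandas()  # first three rows, as shown above
print(df)
```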
### Data Splits
There are three splits: `train`, `validation` and `test`.
| Config | Split | Total |
| :---------|-------------:|-------:|
| `bokmaal` | `train` | 15696 |
| `bokmaal` | `validation` | 2410 |
| `bokmaal` | `test` | 1939 |
| `nynorsk` | `train` | 14174 |
| `nynorsk` | `validation` | 1890 |
| `nynorsk` | `test` | 1511 |
| `combined`| `train`      | 29870 |
| `combined`| `validation` | 4300 |
| `combined`| `test` | 3450 |
## Dataset Creation
### Curation Rationale
1. A _name_ in this context is close to [Saul Kripke's definition of a name](https://en.wikipedia.org/wiki/Saul_Kripke#Naming_and_Necessity),
in that a name has a unique reference and its meaning is constant (there are exceptions in the annotations, e.g. "Regjeringen" (en. "Government")).
2. It is the usage of a name that determines the entity type, not the default/literal sense of the name,
3. If there is an ambiguity in the type/sense of a name, then the default/literal sense of the name is chosen
(following [Markert and Nissim, 2002](http://www.lrec-conf.org/proceedings/lrec2002/pdf/11.pdf)).
For more details, see the "Annotation Guidelines.pdf" distributed with the corpus.
### Source Data
Data was collected from blogs and newspapers in Norwegian, as well as parliament speeches and governmental reports.
#### Initial Data Collection and Normalization
The texts in the Norwegian Dependency Treebank (NDT) are manually annotated with morphological features, syntactic functions
and hierarchical structure. The formalism used for the syntactic annotation is dependency grammar.
The treebank consists of two parts, one part in Norwegian Bokmål (`nob`) and one part in Norwegian Nynorsk (`nno`).
Both parts contain around 300,000 tokens, and are a mix of different non-fictional genres.
See the [NDT webpage](https://www.nb.no/sprakbanken/show?serial=sbr-10) for more details.
### Annotations
The following types of entities are annotated:
- **Person (`PER`):** Real or fictional characters and animals
- **Organization (`ORG`):** Any collection of people, such as firms, institutions, organizations, music groups,
sports teams, unions, political parties etc.
- **Location (`LOC`):** Geographical places, buildings and facilities
- **Geo-political entity (`GPE`):** Geographical regions defined by political and/or social groups.
A GPE entity subsumes and does not distinguish between a nation, its region, its government, or its people
- **Product (`PROD`):** Artificially produced entities are regarded as products. This may include more abstract entities, such as speeches,
radio shows, programming languages, contracts, laws and ideas.
- **Event (`EVT`):** Festivals, cultural events, sports events, weather phenomena, wars, etc. Events are bounded in time and space.
- **Derived (`DRV`):** Words (and phrases?) that are derived from a name, but not a name in themselves. They typically contain a full name and are capitalized, but are not proper nouns. Examples (fictive) are "Brann-treneren" ("the Brann coach") or "Oslo-mannen" ("the man from Oslo").
- **Miscellaneous (`MISC`):** Names that do not belong in the other categories. Examples are animal species and names of medical conditions. Entities that are manufactured or produced are of type Product, whereas things occurring naturally or spontaneously are of type Miscellaneous.
Furthermore, all `GPE` entities are additionally sub-categorized as being either `ORG` or `LOC`, with the two annotation levels separated by an underscore:
- `GPE_LOC`: Geo-political entity, with a locative sense (e.g. "John lives in _Spain_")
- `GPE_ORG`: Geo-political entity, with an organisation sense (e.g. "_Spain_ declined to meet with Belgium")
The two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`/`ORG` tags, conflating them with the other annotations of the same type. This means that the following sets of entity types can be derived:
- 7 types, deleting `_GPE`: **`ORG`**, **`LOC`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
- 8 types, deleting `LOC_` and `ORG_`: **`ORG`**, **`LOC`**, **`GPE`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
- 9 types, keeping all types: **`ORG`**, **`LOC`**, **`GPE_LOC`**, **`GPE_ORG`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`
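The ready-made `-7` and `-8` configs apply these reductions for you; at the entity-type level the mapping amounts to the following illustrative sketch (B-/I- prefixes are preserved unchanged):
```python
def reduce_type(entity_type: str, scheme: int) -> str:
    # Map the 9-type scheme down to the 7- or 8-type scheme.
    if entity_type in ("GPE_LOC", "GPE_ORG"):
        return entity_type.split("_")[1] if scheme == 7 else "GPE"
    return entity_type

assert reduce_type("GPE_LOC", 7) == "LOC"
assert reduce_type("GPE_ORG", 8) == "GPE"
```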
The class distribution is as follows, broken down across the data splits of the UD version of NDT, and sorted by total counts (i.e. the number of examples, not tokens within the spans of the annotations):
| Type | Train | Dev | Test | Total |
| :--------|-------:|-------:|-------:|-------:|
| `PER` | 4033 | 607 | 560 | 5200 |
| `ORG` | 2828 | 400 | 283 | 3511 |
| `GPE_LOC`| 2132 | 258 | 257 | 2647 |
| `PROD` | 671 | 162 | 71 | 904 |
| `LOC` | 613 | 109 | 103 | 825 |
| `DRV`    | 519    | 77     | 48     | 644    |
| `GPE_ORG`| 388    | 55     | 50     | 493    |
| `EVT`    | 131    | 9      | 5      | 145    |
| `MISC`   | 8      | 0      | 0      | 8      |
To access these reduced versions of the dataset, you can use the configs `bokmaal-7`, `nynorsk-7`, `combined-7` for the NER tag set with 7 tags ( **`ORG`**, **`LOC`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`), and `bokmaal-8`, `nynorsk-8`, `combined-8` for the NER tag set with 8 tags (`LOC_` and `ORG_`: **`ORG`**, **`LOC`**, **`GPE`**, `PER`, `PROD`, `EVT`, `DRV`, `MISC`). By default, the full set (9 tags) will be used.
## Additional Information
### Dataset Curators
NorNE was created as a collaboration between [Schibsted Media Group](https://schibsted.com/), [Språkbanken](https://www.nb.no/forskning/sprakbanken/) at the [National Library of Norway](https://www.nb.no) and the [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) at the University of Oslo.
NorNE was added to 🤗 Datasets by the AI-Lab at the National Library of Norway.
### Licensing Information
The NorNE corpus is published under the same [license](https://github.com/ltgoslo/norne/blob/master/LICENSE_NDT.txt) as the Norwegian Dependency Treebank.
### Citation Information
This dataset is described in the paper _NorNE: Annotating Named Entities for Norwegian_ by
Fredrik Jørgensen, Tobias Aasmoe, Anne-Stine Ruud Husevåg, Lilja Øvrelid, and Erik Velldal, accepted for LREC 2020 and available as pre-print here: https://arxiv.org/abs/1911.12146.
```bibtex
@inproceedings{johansen2019ner,
title={NorNE: Annotating Named Entities for Norwegian},
author={Fredrik Jørgensen, Tobias Aasmoe, Anne-Stine Ruud Husevåg,
Lilja Øvrelid, and Erik Velldal},
booktitle={LREC 2020},
year={2020},
url={https://arxiv.org/abs/1911.12146}
}
```
### Contributions
Thanks to [@versae](https://github.com/versae) for adding this dataset. |
norwegian_ner | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- 'no'
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Norwegian NER
dataset_info:
- config_name: bokmaal
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-OTH
'2': I-OTH
'3': E-OTH
'4': S-OTH
'5': B-ORG
'6': I-ORG
'7': E-ORG
'8': S-ORG
'9': B-PRS
'10': I-PRS
'11': E-PRS
'12': S-PRS
'13': B-GEO
'14': I-GEO
'15': E-GEO
'16': S-GEO
splits:
- name: train
num_bytes: 9859760
num_examples: 15696
- name: validation
num_bytes: 1475216
num_examples: 2410
- name: test
num_bytes: 1212939
num_examples: 1939
download_size: 8747760
dataset_size: 12547915
- config_name: nynorsk
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-OTH
'2': I-OTH
'3': E-OTH
'4': S-OTH
'5': B-ORG
'6': I-ORG
'7': E-ORG
'8': S-ORG
'9': B-PRS
'10': I-PRS
'11': E-PRS
'12': S-PRS
'13': B-GEO
'14': I-GEO
'15': E-GEO
'16': S-GEO
splits:
- name: train
num_bytes: 9916338
num_examples: 14174
- name: validation
num_bytes: 1257235
num_examples: 1890
- name: test
num_bytes: 1006733
num_examples: 1511
download_size: 8484545
dataset_size: 12180306
- config_name: samnorsk
features:
- name: idx
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': NOUN
'1': PUNCT
'2': ADP
'3': NUM
'4': SYM
'5': SCONJ
'6': ADJ
'7': PART
'8': DET
'9': CCONJ
'10': PROPN
'11': PRON
'12': X
'13': ADV
'14': INTJ
'15': VERB
'16': AUX
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-OTH
'2': I-OTH
'3': E-OTH
'4': S-OTH
'5': B-ORG
'6': I-ORG
'7': E-ORG
'8': S-ORG
'9': B-PRS
'10': I-PRS
'11': E-PRS
'12': S-PRS
'13': B-GEO
'14': I-GEO
'15': E-GEO
'16': S-GEO
splits:
- name: train
num_bytes: 22508485
num_examples: 34170
- name: validation
num_bytes: 2732419
num_examples: 4300
- name: test
num_bytes: 2219640
num_examples: 3450
download_size: 19133049
dataset_size: 27460544
---
# Dataset Card for Norwegian NER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/ljos/navnkjenner)
- **Repository:** [Github](https://github.com/ljos/navnkjenner)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@jplu](https://github.com/jplu) for adding this dataset. |
nq_open | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: NQ-Open
size_categories:
- 10K<n<100K
source_datasets:
- extended|natural_questions
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: null
dataset_info:
features:
- name: question
dtype: string
- name: answer
sequence: string
config_name: nq_open
splits:
- name: train
num_bytes: 6651344
num_examples: 87925
- name: validation
num_bytes: 313841
num_examples: 3610
download_size: 8913614
dataset_size: 6965185
---
# Dataset Card for nq_open
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://efficientqa.github.io/
- **Repository:** https://github.com/google-research-datasets/natural-questions/tree/master/nq_open
- **Paper:** https://www.aclweb.org/anthology/P19-1612.pdf
- **Leaderboard:** https://ai.google.com/research/NaturalQuestions/efficientqa
- **Point of Contact:** [Mailing List](mailto:efficientqa@googlegroups.com)
### Dataset Summary
The NQ-Open task, introduced by Lee et al. (2019),
is an open-domain question answering benchmark that is derived from Natural Questions.
The goal is to predict an English answer string for an input English question.
All questions can be answered using the contents of English Wikipedia.
### Supported Tasks and Leaderboards
Open Domain Question-Answering,
EfficientQA Leaderboard: https://ai.google.com/research/NaturalQuestions/efficientqa
### Languages
English (`en`)
## Dataset Structure
### Data Instances
```
{
"question": "names of the metropolitan municipalities in south africa",
"answer": [
"Mangaung Metropolitan Municipality",
"Nelson Mandela Bay Metropolitan Municipality",
"eThekwini Metropolitan Municipality",
"City of Tshwane Metropolitan Municipality",
"City of Johannesburg Metropolitan Municipality",
"Buffalo City Metropolitan Municipality",
"City of Ekurhuleni Metropolitan Municipality"
]
}
```
### Data Fields
- `question` - Input open domain question.
- `answer` - List of possible answers to the question
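Because `answer` lists several acceptable strings, predictions are usually scored with an any-match rule after light normalization. A minimal sketch of this convention (SQuAD-style normalization; the helper names are illustrative, not part of the dataset):
```python
import re
import string

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, answers):
    # A prediction counts as correct if it matches any reference answer.
    return normalize(prediction) in {normalize(a) for a in answers}
```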
### Data Splits
- Train: 87,925
- Validation: 3,610
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
Natural Questions contains questions from aggregated queries to Google Search (Kwiatkowski et al., 2019). To gather an open version of this dataset, we only keep questions with short answers and discard the given evidence document. Answers with many tokens often resemble extractive snippets rather than canonical answers, so we discard answers with more than 5 tokens.
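As an illustration of the answer-length criterion described above (whitespace tokenization here is a simplification of the original procedure):
```python
def has_short_answer(example):
    # Keep only examples whose answers are all at most 5 tokens long.
    return all(len(answer.split()) <= 5 for answer in example["answer"])
```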
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
Evaluating on this diverse set of question-answer pairs is crucial, because all existing datasets have inherent biases that are problematic for open domain QA systems with learned retrieval.
In the Natural Questions dataset the question askers do not already know the answer. This accurately reflects a distribution of genuine information-seeking questions.
However, annotators must separately find correct answers, which requires assistance from automatic tools and can introduce a moderate bias towards results from the tool.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
All of the Natural Questions data is released under the
[CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.
### Citation Information
```
@article{doi:10.1162/tacl\_a\_00276,
author = {Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and Toutanova, Kristina and Jones, Llion and Kelcey, Matthew and Chang, Ming-Wei and Dai, Andrew M. and Uszkoreit, Jakob and Le, Quoc and Petrov, Slav},
title = {Natural Questions: A Benchmark for Question Answering Research},
journal = {Transactions of the Association for Computational Linguistics},
volume = {7},
number = {},
pages = {453-466},
year = {2019},
doi = {10.1162/tacl\_a\_00276},
URL = {
https://doi.org/10.1162/tacl_a_00276
},
eprint = {
https://doi.org/10.1162/tacl_a_00276
},
abstract = { We present the Natural Questions corpus, a question answering data set. Questions consist of real anonymized, aggregated queries issued to the Google search engine. An annotator is presented with a question along with a Wikipedia page from the top 5 search results, and annotates a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or marks null if no long/short answer is present. The public release consists of 307,373 training examples with single annotations; 7,830 examples with 5-way annotations for development data; and a further 7,842 examples with 5-way annotated sequestered as test data. We present experiments validating quality of the data. We also describe analysis of 25-way annotations on 302 examples, giving insights into human variability on the annotation task. We introduce robust metrics for the purposes of evaluating question answering systems; demonstrate high human upper bounds on these metrics; and establish baseline results using competitive methods drawn from related literature. }
}
@inproceedings{lee-etal-2019-latent,
title = "Latent Retrieval for Weakly Supervised Open Domain Question Answering",
author = "Lee, Kenton and
Chang, Ming-Wei and
Toutanova, Kristina",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/P19-1612",
doi = "10.18653/v1/P19-1612",
pages = "6086--6096",
abstract = "Recent work on open domain question answering (QA) assumes strong supervision of the supporting evidence and/or assumes a blackbox information retrieval (IR) system to retrieve evidence candidates. We argue that both are suboptimal, since gold evidence is not always available, and QA is fundamentally different from IR. We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system. In this setting, evidence retrieval from all of Wikipedia is treated as a latent variable. Since this is impractical to learn from scratch, we pre-train the retriever with an Inverse Cloze Task. We evaluate on open versions of five QA datasets. On datasets where the questioner already knows the answer, a traditional IR system such as BM25 is sufficient. On datasets where a user is genuinely seeking an answer, we show that learned retrieval is crucial, outperforming BM25 by up to 19 points in exact match.",
}
```
### Contributions
Thanks to [@Nilanshrajput](https://github.com/Nilanshrajput) for adding this dataset. |
nsmc | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- ko
license:
- cc-by-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: nsmc
pretty_name: Naver Sentiment Movie Corpus
dataset_info:
features:
- name: id
dtype: string
- name: document
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 16423803
num_examples: 150000
- name: test
num_bytes: 5491417
num_examples: 50000
download_size: 19522142
dataset_size: 21915220
---
# Dataset Card for Naver sentiment movie corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/e9t/nsmc/)
- **Repository:** [Github](https://github.com/e9t/nsmc/)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
Each instance is a movie review written by a Korean internet user on Naver, the most commonly used search engine in Korea. Each row can be broken down into the following fields:
- `id`: A unique review ID, provided by Naver
- `document`: The actual movie review
- `label`: Binary labels for sentiment analysis, where `0` denotes negative, and `1`, positive
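A minimal loading sketch, assuming the dataset id `nsmc` on the Hugging Face Hub:
```python
from datasets import load_dataset

ds = load_dataset("nsmc", split="train")
print(ds[0])  # {'id': '...', 'document': '...', 'label': 0 or 1}
```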
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{Park:2016,
title = "Naver Sentiment Movie Corpus",
author = "Lucy Park",
year = "2016",
howpublished = {\\url{https://github.com/e9t/nsmc}}
}
```
### Contributions
Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset. |
numer_sense | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other
task_categories:
- text-generation
- fill-mask
task_ids:
- slot-filling
paperswithcode_id: numersense
pretty_name: NumerSense
dataset_info:
features:
- name: sentence
dtype: string
- name: target
dtype: string
splits:
- name: train
num_bytes: 825865
num_examples: 10444
- name: test_core
num_bytes: 62652
num_examples: 1132
- name: test_all
num_bytes: 184180
num_examples: 3146
download_size: 985463
dataset_size: 1072697
---
# Dataset Card for NumerSense
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://inklab.usc.edu/NumerSense/
- **Repository:** https://github.com/INK-USC/NumerSense
- **Paper:** https://arxiv.org/abs/2005.00683
- **Leaderboard:** https://inklab.usc.edu/NumerSense/#exp
- **Point of Contact:** Author emails listed in [paper](https://arxiv.org/abs/2005.00683)
### Dataset Summary
NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145
masked-word-prediction probes. The general idea is to mask numbers between 0-10 in sentences mined from a commonsense
corpus and evaluate whether a language model can correctly predict the masked value.
### Supported Tasks and Leaderboards
The dataset supports the task of slot-filling, specifically as an evaluation of numerical common sense. A leaderboard
is included on the [dataset webpage](https://inklab.usc.edu/NumerSense/#exp) with included benchmarks for GPT-2,
RoBERTa, BERT, and human performance. Leaderboards are included for both the core set and the adversarial set
discussed below.
### Languages
This dataset is in English.
## Dataset Structure
### Data Instances
Each instance consists of a sentence with a masked numerical value between 0-10 and (in the train set) a target.
Example from the training set:
```
sentence: Black bears are about <mask> metres tall.
target: two
```
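The probe can be run against any masked language model whose mask token is `<mask>`. A minimal sketch using the Hugging Face Transformers fill-mask pipeline (`roberta-base` is just one illustrative model choice):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")
for pred in fill("Black bears are about <mask> metres tall."):
    print(pred["token_str"], round(pred["score"], 3))
```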
### Data Fields
Each value of the training set consists of:
- `sentence`: The sentence with a number masked out with the `<mask>` token.
- `target`: The ground truth target value. Since the test sets do not include the ground truth, the `target` field
values are empty strings in the `test_core` and `test_all` splits.
### Data Splits
The dataset includes the following pre-defined data splits:
- A train set with >10K labeled examples (i.e. containing a ground truth value)
- A core test set (`test_core`) with 1,132 examples (no ground truth provided)
- An expanded test set (`test_all`) encompassing `test_core` with the addition of adversarial examples, for a total of
3,146 examples. See section 2.2 of [the paper](https://arxiv.org/abs/2005.00683) for a discussion of how these examples are constructed.
## Dataset Creation
### Curation Rationale
The purpose of this dataset is "to study whether PTLMs capture numerical commonsense knowledge, i.e., commonsense
knowledge that provides an understanding of the numeric relation between entities." This work is motivated by the
prior research exploring whether language models possess _commonsense knowledge_.
### Source Data
#### Initial Data Collection and Normalization
The dataset is an extension of the [Open Mind Common Sense](https://huggingface.co/datasets/open_mind_common_sense)
corpus. A query was performed to discover sentences containing numbers between 0-12, after which the resulting
sentences were manually evaluated for inaccuracies, typos, and the expression of commonsense knowledge. The numerical
values were then masked.
#### Who are the source language producers?
The [Open Mind Common Sense](https://huggingface.co/datasets/open_mind_common_sense) corpus, from which this dataset
is sourced, is a crowdsourced dataset maintained by the MIT Media Lab.
### Annotations
#### Annotation process
No annotations are present in this dataset beyond the `target` values automatically sourced from the masked
sentences, as discussed above.
#### Who are the annotators?
The curation and inspection was done in two rounds by graduate students.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
The motivation of measuring a model's ability to associate numerical values with real-world concepts appears
relatively innocuous. However, as discussed in the following section, the source dataset may well have biases encoded
from crowdworkers, particularly in terms of factoid coverage. A model's ability to perform well on this benchmark
should therefore not be considered evidence that it is more unbiased or objective than a human performing similar
tasks.
### Discussion of Biases
This dataset is sourced from a crowdsourced commonsense knowledge base. While the information contained in the graph
is generally considered to be of high quality, the coverage is considered to be very low as a representation of all
possible commonsense knowledge. The representation of certain factoids may also be skewed by the demographics of the
crowdworkers. As one possible example, the term "homophobia" is connected with "Islam" in the ConceptNet knowledge
base, but not with any other religion or group, possibly due to the biases of crowdworkers contributing to the
project.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was collected by Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren, Computer Science researchers
at the University of Southern California.
### Licensing Information
The data is hosted in a GitHub repository with the
[MIT License](https://github.com/INK-USC/NumerSense/blob/main/LICENSE).
### Citation Information
```
@inproceedings{lin2020numersense,
title={Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-trained Language Models},
author={Bill Yuchen Lin and Seyeon Lee and Rahul Khanna and Xiang Ren},
booktitle={Proceedings of EMNLP},
year={2020},
note={to appear}
}
```
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset. |
numeric_fused_head | ---
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids: []
paperswithcode_id: numeric-fused-head
pretty_name: Numeric Fused Heads
configs:
- identification
- resolution
tags:
- fused-head-identification
dataset_info:
- config_name: identification
features:
- name: tokens
sequence: string
- name: start_index
dtype: int32
- name: end_index
dtype: int32
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
splits:
- name: train
num_bytes: 22290345
num_examples: 165606
- name: test
num_bytes: 68282
num_examples: 500
- name: validation
num_bytes: 2474528
num_examples: 18401
download_size: 24407520
dataset_size: 24833155
- config_name: resolution
features:
- name: tokens
sequence: string
- name: line_indices
sequence: int32
- name: head
sequence: string
- name: speakers
sequence: string
- name: anchors_indices
sequence: int32
splits:
- name: train
num_bytes: 19766437
num_examples: 7412
- name: test
num_bytes: 2743071
num_examples: 1000
- name: validation
num_bytes: 2633549
num_examples: 1000
download_size: 24923403
dataset_size: 25143057
---
# Dataset Card for Numeric Fused Heads
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [The Numeric Fused-Head demo](https://nlp.biu.ac.il/~lazary/fh/)
- **Repository:** [Github Repo](https://github.com/yanaiela/num_fh)
- **Paper:** [Where’s My Head? Definition, Dataset and Models for Numeric Fused-Heads Identification and Resolution](https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00280)
- **Leaderboard:** [NLP Progress](http://nlpprogress.com/english/missing_elements.html)
- **Point of Contact:** [Yanai Elazar](https://yanaiela.github.io), [Yoav Goldberg](https://www.cs.bgu.ac.il/~yoavg/uni/)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
- Numeric Fused Head Identification
- Numeric Fused Head Resolution
### Languages
English
## Dataset Structure
### Data Instances
#### Identification
```
{
"tokens": ["It", "’s", "a", "curious", "thing", ",", "the", "death", "of", "a", "loved", "one", "."]
"start_index": 11
"end_index": 12
"label": 1
}
```
#### Resolution
```
{
"tokens": ["I", "'m", "eighty", "tomorrow", ".", "Are", "you", "sure", "?"],
"line_indices": [0, 0, 0, 0, 0, 1, 1, 1, 1],
"head": ["AGE"],
"speakers": ["John Doe", "John Doe", "John Doe", "John Doe", "John Doe", "Joe Bloggs", "Joe Bloggs", "Joe Bloggs", "Joe Bloggs"],
"anchors_indices": [2]
}
```
### Data Fields
#### Identification
- `tokens` - List of token strings as tokenized with [spaCy](https://spacy.io).
- `start_index` - Start index of the anchor.
- `end_index` - End index of the anchor.
- `label` - "pos" or "neg" depending on whether this example contains a numeric fused head.
#### Resolution
- `tokens` - List of token strings as tokenized with [spaCy](https://spacy.io)
- `line_indices` - List of indices indicating line number (one for each token)
- `head` - Reference to the missing head. If the head exists elsewhere in the sentence this is given as a token index.
- `speakers` - List of speaker names (one for each token)
- `anchors_indices` - Index to indicate which token is the anchor (the visible number)
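A minimal sketch of loading both configurations with the Hugging Face `datasets` library (config names taken from this card; the anchor slice below assumes `end_index` is exclusive, as the identification example above suggests):
```python
from datasets import load_dataset

# Binary task: does the sentence contain a numeric fused head?
identification = load_dataset("numeric_fused_head", "identification")
# Resolution task: recover the missing head for the anchor number.
resolution = load_dataset("numeric_fused_head", "resolution")

ex = identification["train"][0]
anchor = ex["tokens"][ex["start_index"]:ex["end_index"]]  # assumes exclusive end index
print(anchor, ex["label"])  # anchor span and its neg/pos label (0/1)
```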
### Data Splits
The data is split into train, validation, and test sets for each configuration:
| config         | train  | validation | test |
|----------------|-------:|-----------:|-----:|
| identification | 165606 | 18401      | 500  |
| resolution     | 7412   | 1000       | 1000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
MIT License
### Citation Information
```
@article{doi:10.1162/tacl\_a\_00280,
author = {Elazar, Yanai and Goldberg, Yoav},
title = {Where’s My Head? Definition, Data Set, and Models for Numeric Fused-Head Identification and Resolution},
journal = {Transactions of the Association for Computational Linguistics},
volume = {7},
number = {},
pages = {519-535},
year = {2019},
doi = {10.1162/tacl\_a\_00280},
}
```
### Contributions
Thanks to [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset. |
oclar | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ar
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
- sentiment-classification
- sentiment-scoring
paperswithcode_id: null
pretty_name: OCLAR
dataset_info:
features:
- name: pagename
dtype: string
- name: review
dtype: string
- name: rating
dtype: int8
splits:
- name: train
num_bytes: 398204
num_examples: 3916
download_size: 382976
dataset_size: 398204
---
# Dataset Card for OCLAR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [OCLAR homepage](http://archive.ics.uci.edu/ml/datasets/Opinion+Corpus+for+Lebanese+Arabic+Reviews+%28OCLAR%29#)
- **Paper:** [paper link](https://www.semanticscholar.org/paper/Sentiment-Classifier%3A-Logistic-Regression-for-in-Omari-Al-Hajj/9319f4d9e8b3b7bfd0d214314911c071ba7ce1a0)
- **Point of Contact:** [Marwan Al Omari](mailto:marwanalomari@yahoo.com)
### Dataset Summary
The researchers of OCLAR (Marwan et al., 2019) gathered Arabic customer reviews from the [Zomato website](https://www.zomato.com/lebanon)
covering a wide scope of domains, including restaurants, hotels, hospitals, local shops, etc.
The corpus contains 3916 reviews on a 5-point rating scale. For this research purpose, the positive class covers
ratings from 3 to 5 stars (3465 reviews), and the negative class covers ratings of 1 and 2 (about 451 texts).
### Supported Tasks and Leaderboards
The Opinion Corpus for Lebanese Arabic Reviews (OCLAR) can be used for Arabic sentiment classification on service
reviews, including hotels, restaurants, shops, and others.
### Languages
The text in the dataset is in Arabic, mainly in Lebanese (LB). The associated BCP-47 code is `ar-LB`.
## Dataset Structure
### Data Instances
A typical data point comprises a `pagename` (the name of the service or location being reviewed), a `review`
(the review left by the user), and a `rating` (a score between 1 and 5).
The authors consider a review to be positive if its score is greater than or equal to `3`; otherwise it is considered negative.
An example from the OCLAR data set looks as follows:
```
"pagename": 'Ramlet Al Baida Beirut Lebanon',
"review": 'مكان يطير العقل ويساعد على الاسترخاء',
"rating": 5,
```
### Data Fields
- `pagename`: string name of the service / location being reviewed
- `review`: string review left by the user / customer
- `rating`: number of stars left by the reviewer. It ranges from 1 to 5.
### Data Splits
The dataset comes in a single CSV file with a total of `3916` reviews:
- `3465` are considered positive (a rating of 3 to 5)
- `451` are considered negative (a rating of 1 or 2)
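A minimal sketch of loading the corpus and applying the authors' binarization rule (a rating of 3 or more is positive), using the Hugging Face `datasets` library:
```python
from datasets import load_dataset

ds = load_dataset("oclar", split="train")  # single split of 3,916 reviews

# The authors treat ratings 3-5 as positive and 1-2 as negative.
ds = ds.map(lambda ex: {"sentiment": int(ex["rating"] >= 3)})

n_pos = sum(ds["sentiment"])
print(n_pos, "positive /", len(ds) - n_pos, "negative")
```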
## Dataset Creation
### Curation Rationale
This dataset was created for Arabic sentiment classification on services’ reviews in Lebanon country.
Reviews are about public services, including hotels, restaurants, shops, and others.
### Source Data
#### Initial Data Collection and Normalization
The data was collected from Google Reviews and [Zomato website](https://www.zomato.com/lebanon)
#### Who are the source language producers?
The source language producers are people who posted their reviews on Google Reviews or [Zomato website](https://www.zomato.com/lebanon).
They're mainly Arabic speaking Lebanese people.
### Annotations
#### Annotation process
The dataset does not contain any additional annotations.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
The authors' research tackled the highly important task of sentiment analysis for Arabic in the Lebanese
context, using 3916 service reviews from Google and Zomato. Experiments show three main findings:
1) the classifier is confident when used to predict positive reviews,
2) it is biased when predicting reviews with negative sentiment, and
3) the low percentage of negative reviews in the corpus contributes to the low confidence of the logistic regression (LR) classifier.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was curated by Marwan Al Omari, Moustafa Al-Hajj from Centre for Language Sciences and Communication,
Lebanese University, Beirut, Lebanon; Nacereddine Hammami from college of Computer and Information Sciences,
Jouf University, Aljouf, KSA; and Amani Sabra from Centre for Language Sciences and Communication, Lebanese University,
Beirut, Lebanon.
### Licensing Information
[More Information Needed]
### Citation Information
- Marwan Al Omari, Centre for Language Sciences and Communication, Lebanese University, Beirut, Lebanon, marwanalomari '@' yahoo.com
- Moustafa Al-Hajj, Centre for Language Sciences and Communication, Lebanese University, Beirut, Lebanon, moustafa.alhajj '@' ul.edu.lb
- Nacereddine Hammami, college of Computer and Information Sciences, Jouf University, Aljouf, KSA, n.hammami '@' ju.edu.sa
- Amani Sabra, Centre for Language Sciences and Communication, Lebanese University, Beirut, Lebanon, amani.sabra '@' ul.edu.lb
```
@misc{Dua:2019,
author = "Dua, Dheeru and Graff, Casey",
year = "2019",
title = "{UCI} Machine Learning Repository",
url = "http://archive.ics.uci.edu/ml",
institution = "University of California, Irvine, School of Information and Computer Sciences" }
@InProceedings{AlOmari2019oclar,
title = {Sentiment Classifier: Logistic Regression for Arabic Services Reviews in Lebanon},
author={Al Omari, M. and Al-Hajj, M. and Hammami, N. and Sabra, A.},
year={2019}
}
```
### Contributions
Thanks to [@alaameloh](https://github.com/alaameloh) for adding this dataset. |
offcombr | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- pt
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: offcombr
pretty_name: Offensive Comments in the Brazilian Web
tags:
- hate-speech-detection
dataset_info:
- config_name: offcombr-2
features:
- name: label
dtype:
class_label:
names:
'0': 'no'
'1': 'yes'
- name: text
dtype: string
splits:
- name: train
num_bytes: 105703
num_examples: 1250
download_size: 99956
dataset_size: 105703
- config_name: offcombr-3
features:
- name: label
dtype:
class_label:
names:
'0': 'no'
'1': 'yes'
- name: text
dtype: string
splits:
- name: train
num_bytes: 90094
num_examples: 1033
download_size: 85215
dataset_size: 90094
---
# Dataset Card for OffComBR
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://www.inf.ufrgs.br/~rppelle/hatedetector/
- **Repository:** https://github.com/rogersdepelle/OffComBR
- **Paper:** https://sol.sbc.org.br/index.php/brasnam/article/view/3260/3222
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
OffComBR is an annotated dataset for hate speech detection in Portuguese, composed of news comments posted on the Brazilian Web.
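A minimal loading sketch with the Hugging Face `datasets` library; the two configuration names are taken from this card, and each exposes a single `train` split:
```python
from datasets import load_dataset

off2 = load_dataset("offcombr", "offcombr-2", split="train")  # 1,250 comments
off3 = load_dataset("offcombr", "offcombr-3", split="train")  # 1,033 comments

label_names = off2.features["label"].names  # ["no", "yes"]
example = off2[0]
print(label_names[example["label"]], example["text"])
```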
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text is in Portuguese. The associated BCP-47 code is `pt`.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset. |
offenseval2020_tr | ---
annotations_creators:
- found
language_creators:
- found
language:
- tr
license:
- cc-by-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: OffensEval-TR 2020
tags:
- offensive-language-classification
dataset_info:
features:
- name: id
dtype: int32
- name: tweet
dtype: string
- name: subtask_a
dtype:
class_label:
names:
'0': NOT
'1': 'OFF'
config_name: offenseval2020-turkish
splits:
- name: train
num_bytes: 4260505
num_examples: 31756
- name: test
num_bytes: 481300
num_examples: 3528
download_size: 2048258
dataset_size: 4741805
---
# Dataset Card for OffensEval-TR 2020
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [offensive-turkish](https://coltekin.github.io/offensive-turkish/)
- **Paper:** [A Corpus of Turkish Offensive Language on Social Media](https://coltekin.github.io/offensive-turkish/troff.pdf)
- **Point of Contact:** [Çağrı Çöltekin](mailto:ccoltekin@sfs.uni-tuebingen.de)
### Dataset Summary
The file `offenseval-tr-training-v1.tsv` contains 31,756 annotated tweets.
The file `offenseval-annotation.txt` contains a short summary of the annotation guidelines.
Twitter user mentions were substituted by @USER and URLs were substituted by URL.
Each instance carries at most one label, corresponding to the following sub-task:
- Sub-task A: Offensive language identification
### Supported Tasks and Leaderboards
The dataset was introduced in this [paper](https://coltekin.github.io/offensive-turkish/troff.pdf).
### Languages
The dataset is in Turkish.
## Dataset Structure
### Data Instances
A binary dataset with (NOT) Not Offensive and (OFF) Offensive tweets.
### Data Fields
Instances are included in TSV format as follows:
`ID INSTANCE SUBA`
The column names in the file are the following:
`id`, `tweet`, `subtask_a`
The labels used in the annotation are listed below.
#### Task and Labels
(A) Sub-task A: Offensive language identification
- (NOT) Not Offensive - This post does not contain offense or profanity.
- (OFF) Offensive - This post contains offensive language or a targeted (veiled or direct) offense
In our annotation, we label a post as offensive (OFF) if it contains any form of non-acceptable language (profanity) or a targeted offense, which can be veiled or direct.
### Data Splits
| train | test |
|------:|-----:|
| 31756 | 3528 |
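A minimal loading sketch with the Hugging Face `datasets` library (the configuration name `offenseval2020-turkish` is taken from this card):
```python
from datasets import load_dataset

ds = load_dataset("offenseval2020_tr", "offenseval2020-turkish")

label_names = ds["train"].features["subtask_a"].names  # ["NOT", "OFF"]
for ex in ds["train"].select(range(3)):
    print(label_names[ex["subtask_a"]], ex["tweet"][:60])
```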
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
From Twitter.
### Annotations
[More Information Needed]
#### Annotation process
We describe the labels above in a “flat” manner. However, the annotation process we follow is hierarchical. The following QA pairs give a more flowchart-like procedure to follow:
1. Is the tweet in Turkish and understandable?
* No: mark tweet X for exclusion, and go to next tweet
* Yes: continue to step 2
2. Does the tweet include offensive/inappropriate language?
* No: mark the tweet `non` and go to step 4
* Yes: continue to step 3
3. Is the offense in the tweet targeted?
* No: mark the tweet `prof` and go to step 4
* Yes: choose one (or more) of `grp`, `ind`, `oth` based on the definitions above. Please try to limit the number of labels unless it is clear that the tweet includes offense against multiple categories.
4. Was the labeling decision difficult (a precise answer needs more context, the tweet includes irony, or for another reason)?
* No: go to next tweet
* Yes: add the label X, go to next tweet
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The annotations are distributed under the terms of [Creative Commons Attribution License (CC-BY)](https://creativecommons.org/licenses/by/2.0/). Please cite the following paper, if you use this resource.
### Citation Information
```
@inproceedings{coltekin2020lrec,
author = {\c{C}\"{o}ltekin, \c{C}a\u{g}r{\i}},
year = {2020},
title = {A Corpus of Turkish Offensive Language on Social Media},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
pages = {6174--6184},
address = {Marseille, France},
url = {https://www.aclweb.org/anthology/2020.lrec-1.758},
}
```
### Contributions
Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset. |
offenseval_dravidian | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
- kn
- ml
- ta
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
pretty_name: Offenseval Dravidian
configs:
- kannada
- malayalam
- tamil
tags:
- offensive-language
dataset_info:
- config_name: tamil
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Not_offensive
'1': Offensive_Untargetede
'2': Offensive_Targeted_Insult_Individual
'3': Offensive_Targeted_Insult_Group
'4': Offensive_Targeted_Insult_Other
'5': not-Tamil
splits:
- name: train
num_bytes: 4214801
num_examples: 35139
- name: validation
num_bytes: 526108
num_examples: 4388
download_size: 5040217
dataset_size: 4740909
- config_name: malayalam
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Not_offensive
'1': Offensive_Untargetede
'2': Offensive_Targeted_Insult_Individual
'3': Offensive_Targeted_Insult_Group
'4': Offensive_Targeted_Insult_Other
'5': not-malayalam
splits:
- name: train
num_bytes: 1944857
num_examples: 16010
- name: validation
num_bytes: 249364
num_examples: 1999
download_size: 2276736
dataset_size: 2194221
- config_name: kannada
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Not_offensive
'1': Offensive_Untargetede
'2': Offensive_Targeted_Insult_Individual
'3': Offensive_Targeted_Insult_Group
'4': Offensive_Targeted_Insult_Other
'5': not-Kannada
splits:
- name: train
num_bytes: 567119
num_examples: 6217
- name: validation
num_bytes: 70147
num_examples: 777
download_size: 678727
dataset_size: 637266
---
# Dataset Card for Offenseval Dravidian
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://competitions.codalab.org/competitions/27654#learn_the_details
- **Repository:** https://competitions.codalab.org/competitions/27654#participate-get_data
- **Paper:** Findings of the Shared Task on Offensive Language Identification in Tamil, Malayalam, and Kannada
- **Leaderboard:** https://competitions.codalab.org/competitions/27654#results
- **Point of Contact:** [Bharathi Raja Chakravarthi](mailto:bharathiraja.akr@gmail.com)
### Dataset Summary
Offensive language identification is a classification task in natural language processing (NLP) where the aim is to moderate and minimise offensive content in social media. It has been an active area of research in both academia and industry for the past two decades. There is an increasing demand for offensive language identification on social media texts which are largely code-mixed. Code-mixing is a prevalent phenomenon in a multilingual community and the code-mixed texts are sometimes written in non-native scripts. Systems trained on monolingual data fail on code-mixed data due to the complexity of code-switching at different linguistic levels in the text. This shared task presents a new gold standard corpus for offensive language identification of code-mixed text in Dravidian languages (Tamil-English, Malayalam-English, and Kannada-English).
### Supported Tasks and Leaderboards
The goal of this task is to identify the offensive language content of the code-mixed dataset of comments/posts in Dravidian languages (Tamil-English, Malayalam-English, and Kannada-English) collected from social media. A comment/post may contain more than one sentence, but on average the comments are one sentence long. Each comment/post is annotated at the comment/post level. This dataset also has class imbalance problems, depicting real-world scenarios.
### Languages
Code-mixed text in Dravidian languages (Tamil-English, Malayalam-English, and Kannada-English).
## Dataset Structure
### Data Instances
An example from the Tamil dataset looks as follows:
| text | label |
| :------ | :----- |
| படம் கண்டிப்பாக வெற்றி பெற வேண்டும் செம்ம vara level | Not_offensive |
| Avasara patutiya editor uhh antha bullet sequence aa nee soliruka kudathu, athu sollama iruntha movie ku konjam support aa surprise element aa irunthurukum | Not_offensive |
An example from the Malayalam dataset looks as follows:
| text | label |
| :------ | :----- |
| ഷൈലോക്ക് ന്റെ നല്ല ടീസർ ആയിട്ട് പോലും ട്രോളി നടന്ന ലാലേട്ടൻ ഫാൻസിന് കിട്ടിയൊരു നല്ലൊരു തിരിച്ചടി തന്നെ ആയിരിന്നു ബിഗ് ബ്രദർ ന്റെ ട്രെയ്ലർ | Not_offensive |
| Marana mass Ekka kku kodukku oru | Not_offensive |
An example from the Kannada dataset looks as follows:
| text | label |
| :------ | :----- |
| ನಿಜವಾಗಿಯೂ ಅದ್ಭುತ heartly heltidini... plz avrigella namma nimmellara supprt beku | Not_offensive |
| Next song gu kuda alru andre evaga yar comment madidera alla alrru like madi share madi nam industry na next level ge togond hogaona. | Not_offensive |
### Data Fields
Tamil
- `text`: Tamil-English code mixed comment.
- `label`: integer from 0 to 5 that corresponds to these values: "Not_offensive", "Offensive_Untargetede", "Offensive_Targeted_Insult_Individual", "Offensive_Targeted_Insult_Group", "Offensive_Targeted_Insult_Other", "not-Tamil"
Malayalam
- `text`: Malayalam-English code mixed comment.
- `label`: integer from 0 to 5 that corresponds to these values: "Not_offensive", "Offensive_Untargetede", "Offensive_Targeted_Insult_Individual", "Offensive_Targeted_Insult_Group", "Offensive_Targeted_Insult_Other", "not-malayalam"
Kannada
- `text`: Kannada-English code mixed comment.
- `label`: integer from 0 to 5 that corresponds to these values: "Not_offensive", "Offensive_Untargetede", "Offensive_Targeted_Insult_Individual", "Offensive_Targeted_Insult_Group", "Offensive_Targeted_Insult_Other", "not-Kannada"
### Data Splits
| | train | validation |
|-----------|------:|-----------:|
| Tamil | 35139 | 4388 |
| Malayalam | 16010 | 1999 |
| Kannada | 6217 | 777 |
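A minimal sketch of loading all three language configurations with the Hugging Face `datasets` library (config names taken from this card):
```python
from datasets import load_dataset

for lang in ("tamil", "malayalam", "kannada"):
    ds = load_dataset("offenseval_dravidian", lang)
    label_names = ds["train"].features["label"].names
    print(lang, len(ds["train"]), "train /", len(ds["validation"]), "validation")
    print("  labels:", label_names)
```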
## Dataset Creation
### Curation Rationale
There is an increasing demand for offensive language identification on social media texts which are largely code-mixed. Code-mixing is a prevalent phenomenon in a multilingual community and the code-mixed texts are sometimes written in non-native scripts. Systems trained on monolingual data fail on code-mixed data due to the complexity of code-switching at different linguistic levels in the text.
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
Youtube users
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
This work is licensed under a [Creative Commons Attribution 4.0 International Licence](http://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@article{chakravarthi-etal-2021-lre,
title = "DravidianCodeMix: Sentiment Analysis and Offensive Language Identification Dataset for Dravidian Languages in Code-Mixed Text",
author = "Chakravarthi, Bharathi Raja and
Priyadharshini, Ruba and
Muralidaran, Vigneshwaran and
Jose, Navya and
Suryawanshi, Shardul and
Sherly, Elizabeth and
McCrae, John P",
journal={Language Resources and Evaluation},
publisher={Springer}
}
```
```
@inproceedings{dravidianoffensive-eacl,
title={Findings of the Shared Task on {O}ffensive {L}anguage {I}dentification in {T}amil, {M}alayalam, and {K}annada},
author={Chakravarthi, Bharathi Raja and
Priyadharshini, Ruba and
Jose, Navya and
M, Anand Kumar and
Mandl, Thomas and
Kumaresan, Prasanna Kumar and
Ponnsamy, Rahul and
V, Hariharan and
Sherly, Elizabeth and
McCrae, John Philip},
booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages",
month = apr,
year = "2021",
publisher = "Association for Computational Linguistics"
}
```
```
@inproceedings{hande-etal-2020-kancmd,
title = "{K}an{CMD}: {K}annada {C}ode{M}ixed Dataset for Sentiment Analysis and Offensive Language Detection",
author = "Hande, Adeep and
Priyadharshini, Ruba and
Chakravarthi, Bharathi Raja",
booktitle = "Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media",
month = dec,
year = "2020",
address = "Barcelona, Spain (Online)",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.peoples-1.6",
pages = "54--63",
abstract = "We introduce Kannada CodeMixed Dataset (KanCMD), a multi-task learning dataset for sentiment analysis and offensive language identification. The KanCMD dataset highlights two real-world issues from the social media text. First, it contains actual comments in code mixed text posted by users on YouTube social media, rather than in monolingual text from the textbook. Second, it has been annotated for two tasks, namely sentiment analysis and offensive language detection for under-resourced Kannada language. Hence, KanCMD is meant to stimulate research in under-resourced Kannada language on real-world code-mixed social media text and multi-task learning. KanCMD was obtained by crawling the YouTube, and a minimum of three annotators annotates each comment. We release KanCMD 7,671 comments for multitask learning research purpose.",
}
```
```
@inproceedings{chakravarthi-etal-2020-corpus,
title = "Corpus Creation for Sentiment Analysis in Code-Mixed {T}amil-{E}nglish Text",
author = "Chakravarthi, Bharathi Raja and
Muralidaran, Vigneshwaran and
Priyadharshini, Ruba and
McCrae, John Philip",
booktitle = "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources association",
url = "https://www.aclweb.org/anthology/2020.sltu-1.28",
pages = "202--210",
abstract = "Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.",
language = "English",
ISBN = "979-10-95546-35-1",
}
```
```
@inproceedings{chakravarthi-etal-2020-sentiment,
title = "A Sentiment Analysis Dataset for Code-Mixed {M}alayalam-{E}nglish",
author = "Chakravarthi, Bharathi Raja and
Jose, Navya and
Suryawanshi, Shardul and
Sherly, Elizabeth and
McCrae, John Philip",
booktitle = "Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources association",
url = "https://www.aclweb.org/anthology/2020.sltu-1.25",
pages = "177--184",
abstract = "There is an increasing demand for sentiment analysis of text from social media which are mostly code-mixed. Systems trained on monolingual data fail for code-mixed data due to the complexity of mixing at different levels of the text. However, very few resources are available for code-mixed data to create models specific for this data. Although much research in multilingual and cross-lingual sentiment analysis has used semi-supervised or unsupervised methods, supervised methods still performs better. Only a few datasets for popular languages such as English-Spanish, English-Hindi, and English-Chinese are available. There are no resources available for Malayalam-English code-mixed data. This paper presents a new gold standard corpus for sentiment analysis of code-mixed text in Malayalam-English annotated by voluntary annotators. This gold standard corpus obtained a Krippendorff{'}s alpha above 0.8 for the dataset. We use this new corpus to provide the benchmark for sentiment analysis in Malayalam-English code-mixed texts.",
language = "English",
ISBN = "979-10-95546-35-1",
}
```
### Contributions
Thanks to [@jamespaultg](https://github.com/jamespaultg) for adding this dataset. |
ofis_publik | ---
annotations_creators:
- found
language_creators:
- found
language:
- br
- fr
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OfisPublik
dataset_info:
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- br
- fr
config_name: br-fr
splits:
- name: train
num_bytes: 12256825
num_examples: 63422
download_size: 3856983
dataset_size: 12256825
---
# Dataset Card for OfisPublik
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/OfisPublik.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
The OfisPublik corpus is a Breton–French parallel corpus distributed through the [OPUS](http://opus.nlpl.eu/OfisPublik.php) project. It contains 63,422 aligned sentence pairs in a single `br-fr` configuration, collected from texts of the Ofis Publik ar Brezhoneg (Office of the Breton Language).
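A minimal loading sketch with the Hugging Face `datasets` library (the `br-fr` configuration name is taken from this card):
```python
from datasets import load_dataset

ds = load_dataset("ofis_publik", "br-fr", split="train")  # 63,422 sentence pairs

pair = ds[0]["translation"]
print(pair["br"], "->", pair["fr"])
```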
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Breton (`br`) and French (`fr`).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
ohsumed | ---
pretty_name: Ohsumed
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
paperswithcode_id: null
dataset_info:
features:
- name: seq_id
dtype: int64
- name: medline_ui
dtype: int64
- name: mesh_terms
dtype: string
- name: title
dtype: string
- name: publication_type
dtype: string
- name: abstract
dtype: string
- name: author
dtype: string
- name: source
dtype: string
config_name: ohsumed
splits:
- name: train
num_bytes: 60117860
num_examples: 54709
- name: test
num_bytes: 338533901
num_examples: 293855
download_size: 139454017
dataset_size: 398651761
---
# Dataset Card for ohsumed
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://davis.wpi.edu/xmdv/datasets/ohsumed.html
- **Repository:** https://trec.nist.gov/data/filtering/t9.filtering.tar.gz
- **Paper:** https://link.springer.com/chapter/10.1007/978-1-4471-2099-5_20
- **Leaderboard:**
- **Point of Contact:** [William Hersh](mailto:hersh@OHSU.EDU) [Aakash Gupta](mailto:aakashg80@gmail.com)
### Dataset Summary
The OHSUMED test collection is a set of 348,566 references from
MEDLINE, the on-line medical information database, consisting of
titles and/or abstracts from 270 medical journals over a five-year
period (1987-1991). The available fields are title, abstract, MeSH
indexing terms, author, source, and publication type. The National
Library of Medicine has agreed to make the MEDLINE references in the
test database available for experimentation, restricted to the
following conditions:
1. The data will not be used in any non-experimental clinical,
library, or other setting.
2. Any human users of the data will explicitly be told that the data
is incomplete and out-of-date.
Please check this [readme](https://trec.nist.gov/data/filtering/README.t9.filtering) for more details
### Supported Tasks and Leaderboards
[Text Classification](https://paperswithcode.com/sota/text-classification-on-ohsumed)
### Languages
The text is primarily in English. The BCP 47 code is `en`
## Dataset Structure
### Data Instances
```
{'seq_id': 7770,
'medline_ui': 87120420,
'mesh_terms': 'Adult; Aged; Aneurysm/CO; Arteriovenous Fistula/*TH; Carotid Arteries; Case Report; Female; Human; Jugular Veins; Male; Methods; Middle Age; Neck/*BS; Vertebral Artery.',
'title': 'Arteriovenous fistulas of the large vessels of the neck: nonsurgical percutaneous occlusion.',
'publication_type': 'JOURNAL ARTICLE.',
'abstract': 'We describe the nonsurgical treatment of arteriovenous fistulas of the large vessels in the neck using three different means of endovascular occlusion of these large lesions, which are surgically difficult to approach and treat.',
'author': 'Vitek JJ; Keller FS.',
'source': 'South Med J 8705; 80(2):196-200'}
```
### Data Fields
Here are the field definitions:
- seq_id: sequential identifier
(important note: documents should be processed in this order)
- medline_ui: MEDLINE identifier (UI)
(<DOCNO> used for relevance judgements)
- mesh_terms: Human-assigned MeSH terms (MH)
- title: Title (TI)
- publication_type: Publication type (PT)
- abstract: Abstract (AB)
- author: Author (AU)
- source: Source (SO)
Note: some abstracts are truncated at 250 words and some references
have no abstracts at all (titles only). We do not have access to the
full text of the documents.
### Data Splits
The data is split into train and test sets: the train split contains abstracts from 1987, while the test split contains abstracts from 1988–1991.
Total number of documents:
- Train: 54710
- Test: 348567
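A minimal loading sketch with the Hugging Face `datasets` library; the field access below assumes the semicolon-separated MeSH formatting shown in the instance above:
```python
from datasets import load_dataset

ds = load_dataset("ohsumed", "ohsumed")

doc = ds["train"][0]
print(doc["medline_ui"], doc["title"])
print(doc["mesh_terms"].split("; ")[:5])  # MeSH terms appear as a "; "-separated string
```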
## Dataset Creation
### Curation Rationale
The OHSUMED document collection was obtained by William Hersh
(hersh@OHSU.EDU) and colleagues for the experiments described in the
papers below. [Check citation](#citation-information)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The test collection was built as part of a study assessing the use of
MEDLINE by physicians in a clinical setting (Hersh and Hickam; see [Citation Information](#citation-information)).
Novice physicians using MEDLINE generated 106 queries. Only a subset
of these queries were used in the TREC-9 Filtering Track. Before
they searched, they were asked to provide a statement of information
about their patient as well as their information need.
The data was collected by William Hersh & colleagues
### Annotations
#### Annotation process
The existing OHSUMED topics describe actual information needs, but the
relevance judgements probably do not have the same coverage provided
by the TREC pooling process. The MeSH terms do not directly represent
information needs, rather they are controlled indexing terms. However,
the assessment should be more or less complete and there are a lot of
them, so this provides an unusual opportunity to work with a very
large topic sample.
The topic statements are provided in the standard TREC format
#### Who are the annotators?
Each query was replicated by four searchers, two physicians
experienced in searching and two medical librarians. The results were
assessed for relevance by a different group of physicians, using a
three point scale: definitely, possibly, or not relevant. The list of
documents explicitly judged to be not relevant is not provided here.
Over 10% of the query-document pairs were judged in duplicate to
assess inter-observer reliability. For evaluation, all documents
judged here as either possibly or definitely relevant were
considered relevant. TREC-9 systems were allowed to distinguish
between these two categories during the learning process if desired.
### Personal and Sensitive Information
No PII data is present in the train, test or query files.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[Aakash Gupta](mailto:aakashg80@gmail.com)
*Th!nkEvolve Consulting* and Researcher at CoronaWhy
### Licensing Information
CC BY-NC 4.0
### Citation Information
Hersh WR, Buckley C, Leone TJ, Hickam DH, OHSUMED: An interactive
retrieval evaluation and new large test collection for research,
Proceedings of the 17th Annual ACM SIGIR Conference, 1994, 192-201.
Hersh WR, Hickam DH, Use of a multi-application computer workstation
in a clinical setting, Bulletin of the Medical Library Association,
1994, 82: 382-389.
### Contributions
Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset. |
ollie | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
- 1M<n<10M
source_datasets:
- original
task_categories: []
task_ids: []
pretty_name: Ollie
configs:
- ollie_lemmagrep
- ollie_patterned
tags:
- relation-extraction
- text-to-structured
dataset_info:
- config_name: ollie_lemmagrep
features:
- name: arg1
dtype: string
- name: arg2
dtype: string
- name: rel
dtype: string
- name: search_query
dtype: string
- name: sentence
dtype: string
- name: words
dtype: string
- name: pos
dtype: string
- name: chunk
dtype: string
- name: sentence_cnt
dtype: string
splits:
- name: train
num_bytes: 12324648919
num_examples: 18674630
download_size: 1789363108
dataset_size: 12324648919
- config_name: ollie_patterned
features:
- name: rel
dtype: string
- name: arg1
dtype: string
- name: arg2
dtype: string
- name: slot0
dtype: string
- name: search_query
dtype: string
- name: pattern
dtype: string
- name: sentence
dtype: string
- name: parse
dtype: string
splits:
- name: train
num_bytes: 2930309084
num_examples: 3048961
download_size: 387514061
dataset_size: 2930309084
---
# Dataset Card for Ollie
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Ollie](https://knowitall.github.io/ollie/)
- **Repository:** [Github](https://github.com/knowitall/ollie)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/D12-1048/)
### Dataset Summary
The Ollie dataset includes two configurations of the data
used to train the Ollie information extraction algorithm, containing 18M
and 3M sentences respectively.
This data is for academic use only. From the authors:
Ollie is a program that automatically identifies and extracts binary
relationships from English sentences. Ollie is designed for Web-scale
information extraction, where target relations are not specified in
advance.
Ollie is our second-generation information extraction system. Whereas
ReVerb operates on flat sequences of tokens, Ollie works with a
tree-like (graph with only small cycles) representation using
Stanford's compression of the dependencies. This allows Ollie to
capture expressions that ReVerb misses, such as long-range relations.
Ollie also captures context that modifies a binary relation. Presently
Ollie handles attribution (He said/she believes) and enabling
conditions (if X then).
More information is available at the Ollie homepage:
https://knowitall.github.io/ollie/
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
en
## Dataset Structure
### Data Instances
There are two configurations for the dataset: `ollie_lemmagrep`, which
contains 18M sentences from web searches for a subset of the ReVerb
relationships (110,000 relationships), and `ollie_patterned`, a
3M-sentence subset of `ollie_lemmagrep` derived from patterns
according to the Ollie paper.
An example of an ollie_lemmagrep record:
```
{'arg1': 'adobe reader',
'arg2': 'pdf',
'chunk': 'B-NP I-NP I-NP I-NP B-PP B-NP I-NP B-VP B-PP B-NP I-NP O B-VP B-NP I-NP I-NP I-NP B-VP I-VP I-VP O',
'pos': 'JJ NNS CC NNS IN PRP$ NN VBP IN NNP NN CC VB DT NNP NNP NNP TO VB VBN .',
'rel': 'be require to view',
'search_query': 'require reader pdf adobe view',
'sentence': 'Many documents and reports on our site are in PDF format and require the Adobe Acrobat Reader to be viewed .',
'sentence_cnt': '9',
'words': 'many,document,and,report,on,our,site,be,in,pdf,format,and,require,the,adobe,acrobat,reader,to,be,view'}
```
An example of an ollie_patterned record:
```
{'arg1': 'english',
'arg2': 'internet',
'parse': '(in_IN_6), advmod(important_JJ_4, most_RBS_3); nsubj(language_NN_5, English_NNP_0); cop(language_NN_5, being_VBG_1); det(language_NN_5, the_DT_2); amod(language_NN_5, important_JJ_4); prep_in(language_NN_5, era_NN_9); punct(language_NN_5, ,_,_10); conj(language_NN_5, education_NN_12); det(era_NN_9, the_DT_7); nn(era_NN_9, Internet_NNP_8); amod(education_NN_12, English_JJ_11); nsubjpass(enriched_VBN_15, language_NN_5); aux(enriched_VBN_15, should_MD_13); auxpass(enriched_VBN_15, be_VB_14); punct(enriched_VBN_15, ._._16)',
'pattern': '{arg1} <nsubj< {rel:NN} >prep_in> {slot0:NN} >nn> {arg2}',
'rel': 'be language of',
'search_query': 'english language internet',
'sentence': 'English being the most important language in the Internet era , English education should be enriched .',
'slot0': 'era'}
```
### Data Fields
For ollie_lemmagrep:
* rel: the relationship phrase/verb phrase. This may be empty, which represents the "be" relationship.
* arg1: the first argument in the relationship
* arg2: the second argument in the relationship.
* chunk: a tag of each token in the sentence, showing the pos chunks
* pos: part of speech tagging of the sentence
* sentence: the sentence
* sentence_cnt: the number of copies of this sentence encountered
* search_query: a combination of rel, arg1, arg2
* words: the lemma of the words of the sentence separated by commas
For ollie_patterned:
* rel: the relationship phrase/verb phrase.
* arg1: the first argument in the relationship
* arg2: the second argument in the relationship.
* slot0: the third argument in the relationship, which might be empty.
* pattern: a parse pattern for the relationship
* parse: a dependency parse for the sentence
* search_query: a combination of rel, arg1, arg2
* sentence: the sentence
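A minimal loading sketch with the Hugging Face `datasets` library (config names taken from this card; note the large download sizes listed above):
```python
from datasets import load_dataset

# The smaller of the two configs; ollie_lemmagrep is loaded the same way.
patterned = load_dataset("ollie", "ollie_patterned", split="train")

row = patterned[0]
print((row["arg1"], row["rel"], row["arg2"], row["slot0"]))  # one extracted relation tuple
```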
### Data Splits
There are no splits.
## Dataset Creation
### Curation Rationale
This dataset was created as part of research on open information extraction.
### Source Data
#### Initial Data Collection and Normalization
See the research paper on Ollie. The training data is extracted from web pages (ClueWeb09).
#### Who are the source language producers?
The Ollie authors at the University of Washington, with data from ClueWeb09 and the open web.
### Annotations
#### Annotation process
The various parsers and code from the Ollie algorithm.
#### Who are the annotators?
Machine annotated.
### Personal and Sensitive Information
Unknown, but there are likely names of famous individuals.
## Considerations for Using the Data
### Social Impact of Dataset
The goal of the work is to help machines learn to extract information from open domains.
### Discussion of Biases
Since the data is gathered from the web, there is likely to be biased text and relationships.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The authors of Ollie at The University of Washington
### Licensing Information
The University of Washington academic license: https://raw.githubusercontent.com/knowitall/ollie/master/LICENSE
### Citation Information
```
@inproceedings{ollie-emnlp12,
author = {Mausam and Michael Schmitz and Robert Bart and Stephen Soderland and Oren Etzioni},
title = {Open Language Learning for Information Extraction},
booktitle = {Proceedings of Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CONLL)},
year = {2012}
}
```
### Contributions
Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset. |
omp | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- de
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: one-million-posts-corpus
pretty_name: One Million Posts
dataset_info:
- config_name: posts_labeled
features:
- name: ID_Post
dtype: string
- name: ID_Parent_Post
dtype: string
- name: ID_Article
dtype: string
- name: ID_User
dtype: string
- name: CreatedAt
dtype: string
- name: Status
dtype: string
- name: Headline
dtype: string
- name: Body
dtype: string
- name: PositiveVotes
dtype: int32
- name: NegativeVotes
dtype: int32
- name: Category
dtype:
class_label:
names:
'0': ArgumentsUsed
'1': Discriminating
'2': Inappropriate
'3': OffTopic
'4': PersonalStories
'5': PossiblyFeedback
'6': SentimentNegative
'7': SentimentNeutral
'8': SentimentPositive
- name: Value
dtype: int32
- name: Fold
dtype: int32
splits:
- name: train
num_bytes: 13955964
num_examples: 40567
download_size: 1329892
dataset_size: 13955964
- config_name: posts_unlabeled
features:
- name: ID_Post
dtype: string
- name: ID_Parent_Post
dtype: string
- name: ID_Article
dtype: string
- name: ID_User
dtype: string
- name: CreatedAt
dtype: string
- name: Status
dtype: string
- name: Headline
dtype: string
- name: Body
dtype: string
- name: PositiveVotes
dtype: int32
- name: NegativeVotes
dtype: int32
splits:
- name: train
num_bytes: 305770324
num_examples: 1000000
download_size: 79296188
dataset_size: 305770324
- config_name: articles
features:
- name: ID_Article
dtype: string
- name: Path
dtype: string
- name: publishingDate
dtype: string
- name: Title
dtype: string
- name: Body
dtype: string
splits:
- name: train
num_bytes: 43529400
num_examples: 12087
download_size: 10681288
dataset_size: 43529400
---
# Dataset Card for One Million Posts Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://ofai.github.io/million-post-corpus/
- **Repository:** https://github.com/OFAI/million-post-corpus
- **Paper:** https://dl.acm.org/doi/10.1145/3077136.3080711
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The “One Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German).
DER STANDARD is an Austrian daily broadsheet newspaper. On the newspaper’s website, there is a discussion section below each news article where readers engage in online discussions. The data set contains a selection of user posts from the 12-month time span from 2015-06-01 to 2016-05-31. There are 11,773 labeled and 1,000,000 unlabeled posts in the data set. The labeled posts were annotated by professional forum moderators employed by the newspaper.
The data set contains the following data for each post:
* Post ID
* Article ID
* Headline (max. 250 characters)
* Main Body (max. 750 characters)
* User ID (the user names used by the website have been re-mapped to new numeric IDs)
* Time stamp
* Parent post (replies give rise to tree-like discussion thread structures)
* Status (online or deleted by a moderator)
* Number of positive votes by other community members
* Number of negative votes by other community members
For each article, the data set contains the following data:
* Article ID
* Publishing date
* Topic Path (e.g.: Newsroom / Sports / Motorsports / Formula 1)
* Title
* Body
Detailed descriptions of the post selection and annotation procedures are given in the paper.
#### Annotated Categories
Potentially undesirable content:
* Sentiment (negative/neutral/positive)
An important goal is to detect changes in the prevalent sentiment in a discussion, e.g., the location within the fora and the point in time where a turn from positive/neutral sentiment to negative sentiment takes place.
* Off-Topic (yes/no)
Posts which digress too far from the topic of the corresponding article.
* Inappropriate (yes/no)
Swearwords, suggestive and obscene language, insults, threats etc.
* Discriminating (yes/no)
Racist, sexist, misogynistic, homophobic, antisemitic and other misanthropic content.
Neutral content that requires a reaction:
* Feedback (yes/no)
Sometimes users ask questions or give feedback to the author of the article or the newspaper in general, which may require a reply/reaction.
Potentially desirable content:
* Personal Stories (yes/no)
In certain fora, users are encouraged to share their personal stories, experiences, anecdotes etc. regarding the respective topic.
* Arguments Used (yes/no)
It is desirable for users to back their statements with rational argumentation, reasoning and sources.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Austrian German
## Dataset Structure
### Data Instances
An example from the `posts_labeled` config:
```json
{
"ID_Post": "79",
"ID_Parent_Post": "",
"ID_Article": "1",
"ID_User": "12071",
"CreatedAt": "2015-06-01 08:58:32.363",
"Status": "online",
"Headline": "",
"Body": "ich kann keinen hinweis finden, wo man sich hinwenden muss, sollte man als abonnent des standard, die zeitung nicht bekommt, ist dass bewusst so arrangiert?",
"PositiveVotes": 0,
"NegativeVotes": 0,
"Category": 5,
"Value": 1,
"Fold": 1
}
```
An example from the `posts_unlabeled` config:
```json
{
"ID_Post": "51",
"ID_Parent_Post": "",
"ID_Article": "1",
"ID_User": "11125",
"CreatedAt": "2011-05-15 08:37:11.313",
"Status": "online",
"Headline": "Ich würde es sehr begrüßen, wenn",
"Body": "Antworten erst beim Erscheinen als e-Mail dem Poster zugestellt würden.\r\n\r\nEs gibt User, die ihre Kommentare sofort nach Mail-Eingang irgendwo hinposten. Dadurch wird \r\n1. vor allem für andere Unser die Lesbarkeit wesentlich beeinträchtigt,\r\n2. kann das Post verdreht wiedergegeben werden,\r\n3. man ist immer wieder gezwungen die Antwort richtig zu stellen.\r\n\r\nPrivatfehden von Usern sollten, wenn schon zugelassen, für alle User nachvollziehbar sein.\r\n\r\nDanke!",
"PositiveVotes": 1,
"NegativeVotes": 0
}
```
An example from the `articles` config:
```json
{
"ID_Article": "41",
"Path": "Newsroom/Wirtschaft/Wirtschaftpolitik/Energiemarkt",
"publishingDate": "2015-06-01 12:39:35.00",
"Title": "Öl- und Gas-Riesen fordern weltweite CO2-Preise",
"Body": '<div class="section" id="content-main" itemprop="articleBody"><div class="copytext"><h2 itemprop="description">Brief von BP, Total, Shell, Statoil, BG Group und Eni unterzeichnet</h2><p>Paris/London/La Defense - Sechs große Öl- und Gaskonzerne haben mit Blick auf die Verhandlungen über einen neuen Welt-Klimavertrag ein globales Preissystem für CO2-Emissionen gefordert. Wenn der Ausstoß von CO2 Geld kostet, sei dies ein Anreiz für die Nutzung von Erdgas statt Kohle, mehr Energieeffizienz und Investitionen zur Vermeidung des Treibhausgases, heißt es in einem am Montag veröffentlichten Brief.</p>\n<p>Das Schreiben ist unterzeichnet von BP, Total, Shell, Statoil, BG Group und Eni. Die Unternehmen versicherten, sie seien bereit, ihren Teil zum Kampf gegen den <a href="/r1937/Klimawandel">Klimawandel</a> beizutragen. Dafür sei aber ein klarer und verlässlicher Politik-Rahmen nötig. (APA, 1.6.2015)</p> </div></div>'
}
```
### Data Fields
The data set contains the following data for each post:
* **ID_Post**: Post ID
* **ID_Parent_Post**: Parent post (replies give rise to tree-like discussion thread structures)
* **ID_Article**: Article ID
* **ID_User**: User ID (the user names used by the website have been re-mapped to new numeric IDs)
* **Headline**: Headline (max. 250 characters)
* **Body**: Main Body (max. 750 characters)
* **CreatedAt**: Time stamp
* **Status**: Status (online or deleted by a moderator)
* **PositiveVotes**: Number of positive votes by other community members
* **NegativeVotes**: Number of negative votes by other community members
Labeled posts also contain:
* **Category**: The category of the annotation, one of: ArgumentsUsed, Discriminating, Inappropriate, OffTopic, PersonalStories, PossiblyFeedback, SentimentNegative, SentimentNeutral, SentimentPositive
* **Value**: either 0 or 1, explicitly indicating whether or not the post has the specified category as a label (i.e. a category of `ArgumentsUsed` with value of `0` means that an annotator explicitly labeled that this post doesn't use arguments, as opposed to the mere absence of a positive label).
* **Fold**: a number between 0 and 9 from a 10-fold split by the authors
For each article, the data set contains the following data:
* **ID_Article**: Article ID
* **publishingDate**: Publishing date
* **Path**: Topic Path (e.g.: Newsroom / Sports / Motorsports / Formula 1)
* **Title**: Title
* **Body**: Body
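For example, a minimal sketch (assuming this dataset id, `omp`, on the Hub) of loading the labeled posts and decoding the integer `Category` back to its class name:
```python
from datasets import load_dataset

posts = load_dataset("omp", "posts_labeled", split="train")
example = posts[0]

# `Category` is a ClassLabel; map the integer back to its name.
category = posts.features["Category"].int2str(example["Category"])
print(example["ID_Post"], category, example["Value"])
```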
### Data Splits
Training split only.
| name | train |
|-----------------|--------:|
| posts_labeled | 40567 |
| posts_unlabeled | 1000000 |
| articles | 12087 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This data set is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
### Citation Information
```
@InProceedings{Schabus2018,
author = {Dietmar Schabus and Marcin Skowron},
title = {Academic-Industrial Perspective on the Development and Deployment of a Moderation System for a Newspaper Website},
booktitle = {Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC)},
year = {2018},
address = {Miyazaki, Japan},
month = may,
pages = {1602-1605},
abstract = {This paper describes an approach and our experiences from the development, deployment and usability testing of a Natural Language Processing (NLP) and Information Retrieval system that supports the moderation of user comments on a large newspaper website. We highlight some of the differences between industry-oriented and academic research settings and their influence on the decisions made in the data collection and annotation processes, selection of document representation and machine learning methods. We report on classification results, where the problems to solve and the data to work with come from a commercial enterprise. In this context typical for NLP research, we discuss relevant industrial aspects. We believe that the challenges faced as well as the solutions proposed for addressing them can provide insights to others working in a similar setting.},
url = {http://www.lrec-conf.org/proceedings/lrec2018/summaries/8885.html},
}
```
### Contributions
Thanks to [@aseifert](https://github.com/aseifert) for adding this dataset. |
onestop_english | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
- text-classification
task_ids:
- multi-class-classification
- text-simplification
paperswithcode_id: onestopenglish
pretty_name: OneStopEnglish corpus
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': ele
'1': int
'2': adv
splits:
- name: train
num_bytes: 2278043
num_examples: 567
download_size: 1228804
dataset_size: 2278043
---
# Dataset Card for OneStopEnglish corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/nishkalavallabhi/OneStopEnglishCorpus
- **Repository:** https://github.com/purvimisal/OneStopCorpus-Compiled/raw/main/Texts-SeparatedByReadingLevel.zip
- **Paper:** https://www.aclweb.org/anthology/W18-0535.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
OneStopEnglish is a corpus of texts written at three reading levels; it demonstrates its usefulness through two applications: automatic readability assessment and automatic text simplification.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
An instance example:
```
{
"text": "When you see the word Amazon, what’s the first thing you think...",
"label": 0
}
```
Note that each instance contains the full text of the document.
### Data Fields
- `text`: Full document text.
- `label`: Reading level of the document: ele/int/adv (Elementary/Intermediate/Advanced).
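A minimal loading sketch, decoding the integer label back to its reading-level name:
```python
from datasets import load_dataset

ds = load_dataset("onestop_english", split="train")
example = ds[0]

# `label` is a ClassLabel with names ele/int/adv.
print(ds.features["label"].int2str(example["label"]), example["text"][:80])
```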
### Data Splits
The OneStopEnglish dataset has a single _train_ split.
| Split | Number of instances |
|-------|--------------------:|
| train | 567 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons Attribution-ShareAlike 4.0 International License
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@purvimisal](https://github.com/purvimisal) for adding this dataset. |
onestop_qa | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|onestop_english
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: onestopqa
pretty_name: OneStopQA
language_bcp47:
- en-US
dataset_info:
features:
- name: title
dtype: string
- name: paragraph
dtype: string
- name: level
dtype:
class_label:
names:
'0': Adv
'1': Int
'2': Ele
- name: question
dtype: string
- name: paragraph_index
dtype: int32
- name: answers
sequence: string
length: 4
- name: a_span
sequence: int32
- name: d_span
sequence: int32
splits:
- name: train
num_bytes: 1423090
num_examples: 1458
download_size: 118173
dataset_size: 1423090
---
# Dataset Card for OneStopQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [OneStopQA repository](https://github.com/berzak/onestop-qa)
- **Repository:** [OneStopQA repository](https://github.com/berzak/onestop-qa)
- **Paper:** [STARC: Structured Annotations for Reading Comprehension](https://arxiv.org/abs/2004.14797)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
OneStopQA is a multiple choice reading comprehension dataset annotated according to the STARC (Structured Annotations for Reading Comprehension) scheme. The reading materials are Guardian articles taken from the [OneStopEnglish corpus](https://github.com/nishkalavallabhi/OneStopEnglishCorpus). Each article comes in three difficulty levels, Elementary, Intermediate and Advanced. Each paragraph is annotated with three multiple choice reading comprehension questions. The reading comprehension questions can be answered based on any of the three paragraph levels.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English (`en-US`).
The original Guardian articles were manually converted from British to American English.
## Dataset Structure
### Data Instances
An example instance looks as follows.
```json
{
"title": "101-Year-Old Bottle Message",
"paragraph": "Angela Erdmann never knew her grandfather. He died in 1946, six years before she was born. But, on Tuesday 8th April, 2014, she described the extraordinary moment when she received a message in a bottle, 101 years after he had lobbed it into the Baltic Sea. Thought to be the world’s oldest message in a bottle, it was presented to Erdmann by the museum that is now exhibiting it in Germany.",
"paragraph_index": 1,
"level": "Adv",
"question": "How did Angela Erdmann find out about the bottle?",
"answers": ["A museum told her that they had it",
"She coincidentally saw it at the museum where it was held",
"She found it in her basement on April 28th, 2014",
"A friend told her about it"],
"a_span": [56, 70],
"d_span": [16, 34]
}
```
Where,
| Answer | Description | Textual Span |
|--------|------------------------------------------------------------|-----------------|
| a | Correct answer. | Critical Span |
| b | Incorrect answer. A miscomprehension of the critical span. | Critical Span |
| c | Incorrect answer. Refers to an additional span. | Distractor Span |
| d | Incorrect answer. Has no textual support. | - |
The order of the answers in the `answers` list corresponds to the order of the answers in the table.
### Data Fields
- `title`: A `string` feature. The article title.
- `paragraph`: A `string` feature. The paragraph from the article.
- `paragraph_index`: An `int` feature. Corresponds to the paragraph index in the article.
- `question`: A `string` feature. The given question.
- `answers`: A list of `string` feature containing the four possible answers.
- `a_span`: A list of start and end indices (inclusive) of the critical span.
- `d_span`: A list of start and end indices (inclusive) of the distractor span.
Note: span indices are given as word positions after whitespace tokenization.
In the rare case where a span is spread over multiple sections, the span list contains multiple start/stop index pairs in the format `[start_1, stop_1, start_2, stop_2, ...]`.
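A small helper sketch illustrating this indexing convention (the `span_text` function is ours, for illustration only):
```python
def span_text(paragraph, span):
    """Recover a span's text from inclusive word indices (whitespace tokenization)."""
    words = paragraph.split()
    parts = []
    # The span list may hold several (start, stop) pairs: [start_1, stop_1, start_2, stop_2, ...]
    for i in range(0, len(span), 2):
        start, stop = span[i], span[i + 1]
        parts.append(" ".join(words[start : stop + 1]))
    return " ... ".join(parts)
```
With the instance above, `span_text(paragraph, [56, 70])` recovers the critical span supporting answer `a`.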
### Data Splits
- Articles: 30
- Paragraphs: 162
- Questions: 486
- Question-Paragraph Level pairs: 1,458

No preconfigured split is currently provided.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
The annotation and piloting process of the dataset is described in Appendix A in
[STARC: Structured Annotations for Reading Comprehension](https://aclanthology.org/2020.acl-main.507.pdf).
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>.
### Citation Information
[STARC: Structured Annotations for Reading Comprehension](http://people.csail.mit.edu/berzak/papers/acl2020.pdf)
```
@inproceedings{starc2020,
author = {Berzak, Yevgeni and Malmaud, Jonathan and Levy, Roger},
title = {STARC: Structured Annotations for Reading Comprehension},
booktitle = {ACL},
year = {2020},
publisher = {Association for Computational Linguistics}
}
```
### Contributions
Thanks to [@scaperex](https://github.com/scaperex) for adding this dataset. |
open_subtitles | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- ar
- bg
- bn
- br
- bs
- ca
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- ko
- lt
- lv
- mk
- ml
- ms
- nl
- 'no'
- pl
- pt
- ro
- ru
- si
- sk
- sl
- sq
- sr
- sv
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- zh
language_bcp47:
- pt-BR
- ze-EN
- ze-ZH
- zh-CN
- zh-TW
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
- 1M<n<10M
- n<1K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: opensubtitles
pretty_name: OpenSubtitles
configs:
- bn-is
- bs-eo
- da-ru
- en-hi
- fr-hy
dataset_info:
- config_name: bs-eo
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: bs
dtype: uint32
- name: eo
dtype: uint32
- name: sentenceIds
struct:
- name: bs
sequence: uint32
- name: eo
sequence: uint32
- name: translation
dtype:
translation:
languages:
- bs
- eo
splits:
- name: train
num_bytes: 1204266
num_examples: 10989
download_size: 333050
dataset_size: 1204266
- config_name: fr-hy
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: fr
dtype: uint32
- name: hy
dtype: uint32
- name: sentenceIds
struct:
- name: fr
sequence: uint32
- name: hy
sequence: uint32
- name: translation
dtype:
translation:
languages:
- fr
- hy
splits:
- name: train
num_bytes: 132450
num_examples: 668
download_size: 41861
dataset_size: 132450
- config_name: da-ru
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: da
dtype: uint32
- name: ru
dtype: uint32
- name: sentenceIds
struct:
- name: da
sequence: uint32
- name: ru
sequence: uint32
- name: translation
dtype:
translation:
languages:
- da
- ru
splits:
- name: train
num_bytes: 1082649105
num_examples: 7543012
download_size: 267995167
dataset_size: 1082649105
- config_name: en-hi
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: en
dtype: uint32
- name: hi
dtype: uint32
- name: sentenceIds
struct:
- name: en
sequence: uint32
- name: hi
sequence: uint32
- name: translation
dtype:
translation:
languages:
- en
- hi
splits:
- name: train
num_bytes: 13845544
num_examples: 93016
download_size: 2967295
dataset_size: 13845544
- config_name: bn-is
features:
- name: id
dtype: string
- name: meta
struct:
- name: year
dtype: uint32
- name: imdbId
dtype: uint32
- name: subtitleId
struct:
- name: bn
dtype: uint32
- name: is
dtype: uint32
- name: sentenceIds
struct:
- name: bn
sequence: uint32
- name: is
sequence: uint32
- name: translation
dtype:
translation:
languages:
- bn
- is
splits:
- name: train
num_bytes: 6371251
num_examples: 38272
download_size: 1411625
dataset_size: 6371251
---
# Dataset Card for OpenSubtitles
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/OpenSubtitles.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2016/pdf/62_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
To load a language pair that isn't preconfigured, specify the two language codes as a pair when loading.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/OpenSubtitles.php
E.g.
`dataset = load_dataset("open_subtitles", lang1="fi", lang2="hi")`
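Each row then exposes a `translation` dict keyed by the two language codes (per the feature schema in the YAML header), for example:
```python
from datasets import load_dataset

ds = load_dataset("open_subtitles", lang1="fi", lang2="hi", split="train")
pair = ds[0]["translation"]  # e.g. {"fi": "...", "hi": "..."}
print(pair["fi"], "||", pair["hi"])
```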
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- af
- ar
- bg
- bn
- br
- bs
- ca
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- ka
- kk
- ko
- lt
- lv
- mk
- ml
- ms
- nl
- no
- pl
- pt
- pt_br: Portuguese (Brazil) (pt-BR)
- ro
- ru
- si
- sk
- sl
- sq
- sr
- sv
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- ze_en: English constituent of Bilingual Chinese-English (subtitles displaying two languages at once, one per line)
- ze_zh: Chinese constituent of Bilingual Chinese-English (subtitles displaying two languages at once, one per line)
- zh_cn: Simplified Chinese (zh-CN, `zh-Hans`)
- zh_tw: Traditional Chinese (zh-TW, `zh-Hant`)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
openai_humaneval | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
pretty_name: OpenAI HumanEval
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
tags:
- code-generation
paperswithcode_id: humaneval
dataset_info:
features:
- name: task_id
dtype: string
- name: prompt
dtype: string
- name: canonical_solution
dtype: string
- name: test
dtype: string
- name: entry_point
dtype: string
config_name: openai_humaneval
splits:
- name: test
num_bytes: 194414
num_examples: 164
download_size: 44877
dataset_size: 194414
---
# Dataset Card for OpenAI HumanEval
## Table of Contents
- [OpenAI HumanEval](#openai-humaneval)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/openai/human-eval)
- **Paper:** [Evaluating Large Language Models Trained on Code](https://arxiv.org/abs/2107.03374)
### Dataset Summary
The HumanEval dataset released by OpenAI includes 164 programming problems with a function signature, docstring, body, and several unit tests. They were handwritten to ensure they were not included in the training sets of code generation models.
### Supported Tasks and Leaderboards
### Languages
The programming problems are written in Python and contain English natural text in comments and docstrings.
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("openai_humaneval")
DatasetDict({
test: Dataset({
features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point'],
num_rows: 164
})
})
```
### Data Instances
An example of a dataset instance:
```
{
"task_id": "test/0",
"prompt": "def return1():\n",
"canonical_solution": " return 1",
"test": "def check(candidate):\n assert candidate() == 1",
"entry_point": "return1"
}
```
### Data Fields
- `task_id`: identifier for the data sample
- `prompt`: input for the model containing function header and docstrings
- `canonical_solution`: solution for the problem in the `prompt`
- `test`: contains function to test generated code for correctness
- `entry_point`: entry point for test
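Given these fields, one common evaluation pattern is to concatenate the `prompt`, a model completion (here the `canonical_solution`, for illustration), and the `test` code, then call the generated `check` function on the `entry_point`. A sketch, to be run only in a sandboxed environment:
```python
from datasets import load_dataset

ds = load_dataset("openai_humaneval", split="test")
sample = ds[0]

# Assemble a self-contained program: signature/docstring + body + unit tests.
program = sample["prompt"] + sample["canonical_solution"] + "\n" + sample["test"]

namespace = {}
exec(program, namespace)  # WARNING: exec runs arbitrary code; sandbox model output.
namespace["check"](namespace[sample["entry_point"]])  # raises AssertionError on failure
```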
### Data Splits
The dataset only consists of a test split with 164 samples.
## Dataset Creation
### Curation Rationale
Since code generation models are often trained on dumps of GitHub, a dataset not included in such dumps was necessary to properly evaluate them. However, since this dataset was published on GitHub, it is likely to be included in future dumps.
### Source Data
The dataset was handcrafted by engineers and researchers at OpenAI.
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
None.
## Considerations for Using the Data
Make sure you execute generated Python code in a safe environment when evaluating against this dataset, as generated code could be harmful.
### Social Impact of Dataset
With this dataset, code-generating models can be better evaluated, which leads to fewer issues being introduced when such models are used.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
OpenAI
### Licensing Information
MIT License
### Citation Information
```
@misc{chen2021evaluating,
title={Evaluating Large Language Models Trained on Code},
author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
year={2021},
eprint={2107.03374},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
### Contributions
Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset. |
openbookqa | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: OpenBookQA
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: openbookqa
dataset_info:
- config_name: main
features:
- name: id
dtype: string
- name: question_stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
splits:
- name: train
num_bytes: 896034
num_examples: 4957
- name: validation
num_bytes: 95519
num_examples: 500
- name: test
num_bytes: 91850
num_examples: 500
download_size: 1446098
dataset_size: 1083403
- config_name: additional
features:
- name: id
dtype: string
- name: question_stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
- name: fact1
dtype: string
- name: humanScore
dtype: float32
- name: clarity
dtype: float32
- name: turkIdAnonymized
dtype: string
splits:
- name: train
num_bytes: 1290473
num_examples: 4957
- name: validation
num_bytes: 136141
num_examples: 500
- name: test
num_bytes: 130926
num_examples: 500
download_size: 1446098
dataset_size: 1557540
---
# Dataset Card for OpenBookQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/open-book-qa](https://allenai.org/data/open-book-qa)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.89 MB
- **Size of the generated dataset:** 2.88 MB
- **Total amount of disk used:** 5.78 MB
### Dataset Summary
OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic
(with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In
particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge,
and rich text comprehension.
OpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding of
a subject.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### main
- **Size of downloaded dataset files:** 1.45 MB
- **Size of the generated dataset:** 1.45 MB
- **Total amount of disk used:** 2.88 MB
An example of 'train' looks as follows:
```
{'id': '7-980',
'question_stem': 'The sun is responsible for',
'choices': {'text': ['puppies learning new tricks',
'children growing up and getting old',
'flowers wilting in a vase',
'plants sprouting, blooming and wilting'],
'label': ['A', 'B', 'C', 'D']},
'answerKey': 'D'}
```
#### additional
- **Size of downloaded dataset files:** 1.45 MB
- **Size of the generated dataset:** 1.45 MB
- **Total amount of disk used:** 2.88 MB
An example of 'train' looks as follows:
```
{'id': '7-980',
'question_stem': 'The sun is responsible for',
'choices': {'text': ['puppies learning new tricks',
'children growing up and getting old',
'flowers wilting in a vase',
'plants sprouting, blooming and wilting'],
'label': ['A', 'B', 'C', 'D']},
'answerKey': 'D',
'fact1': 'the sun is the source of energy for physical cycles on Earth',
'humanScore': 1.0,
'clarity': 2.0,
'turkIdAnonymized': 'b356d338b7'}
```
### Data Fields
The data fields are the same among all splits.
#### main
- `id`: a `string` feature.
- `question_stem`: a `string` feature.
- `choices`: a dictionary feature containing:
- `text`: a `string` feature.
- `label`: a `string` feature.
- `answerKey`: a `string` feature.
#### additional
- `id`: a `string` feature.
- `question_stem`: a `string` feature.
- `choices`: a dictionary feature containing:
- `text`: a `string` feature.
- `label`: a `string` feature.
- `answerKey`: a `string` feature.
- `fact1` (`str`): Originating common-knowledge core fact associated with the question.
- `humanScore` (`float`): Human accuracy score.
- `clarity` (`float`): Clarity score.
- `turkIdAnonymized` (`str`): Anonymized crowd-worker ID.
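For example, a minimal sketch mapping `answerKey` back to the text of the correct choice:
```python
from datasets import load_dataset

ds = load_dataset("openbookqa", "main", split="train")
ex = ds[0]

# `choices` holds parallel lists; find the position of the answer key among the labels.
correct = ex["choices"]["text"][ex["choices"]["label"].index(ex["answerKey"])]
print(ex["question_stem"], "->", correct)
```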
### Data Splits
| name | train | validation | test |
|------------|------:|-----------:|-----:|
| main | 4957 | 500 | 500 |
| additional | 4957 | 500 | 500 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{OpenBookQA2018,
title={Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering},
author={Todor Mihaylov and Peter Clark and Tushar Khot and Ashish Sabharwal},
booktitle={EMNLP},
year={2018}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
openslr | ---
pretty_name: OpenSLR
annotations_creators:
- found
language_creators:
- found
language:
- af
- bn
- ca
- en
- es
- eu
- gl
- gu
- jv
- km
- kn
- ml
- mr
- my
- ne
- si
- st
- su
- ta
- te
- tn
- ve
- xh
- yo
language_bcp47:
- en-GB
- en-IE
- en-NG
- es-CL
- es-CO
- es-PE
- es-PR
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- automatic-speech-recognition
task_ids: []
paperswithcode_id: null
configs:
- SLR32
- SLR35
- SLR36
- SLR41
- SLR42
- SLR43
- SLR44
- SLR52
- SLR53
- SLR54
- SLR63
- SLR64
- SLR65
- SLR66
- SLR69
- SLR70
- SLR71
- SLR72
- SLR73
- SLR74
- SLR75
- SLR76
- SLR77
- SLR78
- SLR79
- SLR80
- SLR83
- SLR86
dataset_info:
- config_name: SLR41
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2423902
num_examples: 5822
download_size: 1890792360
dataset_size: 2423902
- config_name: SLR42
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1427984
num_examples: 2906
download_size: 866086951
dataset_size: 1427984
- config_name: SLR43
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1074005
num_examples: 2064
download_size: 800375645
dataset_size: 1074005
- config_name: SLR44
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1776827
num_examples: 4213
download_size: 1472252752
dataset_size: 1776827
- config_name: SLR63
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2016587
num_examples: 4126
download_size: 1345876299
dataset_size: 2016587
- config_name: SLR64
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 810375
num_examples: 1569
download_size: 712155683
dataset_size: 810375
- config_name: SLR65
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2136447
num_examples: 4284
download_size: 1373304655
dataset_size: 2136447
- config_name: SLR66
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1898335
num_examples: 4448
download_size: 1035127870
dataset_size: 1898335
- config_name: SLR69
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1647263
num_examples: 4240
download_size: 1848659543
dataset_size: 1647263
- config_name: SLR35
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 73565374
num_examples: 185076
download_size: 18900105726
dataset_size: 73565374
- config_name: SLR36
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 88942337
num_examples: 219156
download_size: 22996553929
dataset_size: 88942337
- config_name: SLR70
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1339608
num_examples: 3359
download_size: 1213955196
dataset_size: 1339608
- config_name: SLR71
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1676273
num_examples: 4374
download_size: 1445365903
dataset_size: 1676273
- config_name: SLR72
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1876301
num_examples: 4903
download_size: 1612030532
dataset_size: 1876301
- config_name: SLR73
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2084052
num_examples: 5447
download_size: 1940306814
dataset_size: 2084052
- config_name: SLR74
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 237395
num_examples: 617
download_size: 214181314
dataset_size: 237395
- config_name: SLR75
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1286937
num_examples: 3357
download_size: 1043317004
dataset_size: 1286937
- config_name: SLR76
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2756507
num_examples: 7136
download_size: 3041125513
dataset_size: 2756507
- config_name: SLR77
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2217652
num_examples: 5587
download_size: 2207991775
dataset_size: 2217652
- config_name: SLR78
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2121986
num_examples: 4272
download_size: 1743222102
dataset_size: 2121986
- config_name: SLR79
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 2176539
num_examples: 4400
download_size: 1820919115
dataset_size: 2176539
- config_name: SLR80
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1308651
num_examples: 2530
download_size: 948181015
dataset_size: 1308651
- config_name: SLR86
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 1378801
num_examples: 3583
download_size: 907065562
dataset_size: 1378801
- config_name: SLR32
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 4544052380
num_examples: 9821
download_size: 3312884763
dataset_size: 4544052380
- config_name: SLR52
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 77369899
num_examples: 185293
download_size: 14676484074
dataset_size: 77369899
- config_name: SLR53
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 88073248
num_examples: 218703
download_size: 14630810921
dataset_size: 88073248
- config_name: SLR54
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 62735822
num_examples: 157905
download_size: 9328247362
dataset_size: 62735822
- config_name: SLR83
features:
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 7098985
num_examples: 17877
download_size: 7229890819
dataset_size: 7098985
---
# Dataset Card for openslr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.openslr.org/
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
OpenSLR is a site devoted to hosting speech and language resources, such as training corpora for speech recognition,
and software related to speech recognition. Currently, the following resources are available:
#### SLR32: High quality TTS data for four South African languages (af, st, tn, xh).
This data set contains multi-speaker high quality transcribed audio data for four languages of South Africa.
The data set consists of wave files, and a TSV file transcribing the audio. In each folder, the file line_index.tsv
contains a FileID, which in turn contains the UserID and the Transcription of audio in the file.
The data set has had some quality checks, but there might still be errors.
This data set was collected as a collaboration between North West University and Google.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See https://github.com/google/language-resources#license for license information.
Copyright 2017 Google, Inc.
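Each resource in this list is exposed as a config named after its `SLRxx` identifier (see the `configs` list in the YAML header). A minimal loading sketch for SLR32; decoding the `audio` column may require the audio extras (`pip install datasets[audio]`):
```python
from datasets import load_dataset

# Load one OpenSLR resource by its config name.
ds = load_dataset("openslr", "SLR32", split="train")
ex = ds[0]
print(ex["sentence"])                # the transcription
print(ex["audio"]["sampling_rate"])  # 48 kHz per the feature schema
```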
#### SLR35: Large Javanese ASR training data set.
This data set contains transcribed audio data for Javanese (~185K utterances). The data set consists of wave files,
and a TSV file. The file utt_spk_text.tsv contains a FileID, UserID and the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada
in Indonesia.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/35/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017 Google, Inc.
#### SLR36: Large Sundanese ASR training data set.
This data set contains transcribed audio data for Sundanese (~220K utterances). The data set consists of wave files,
and a TSV file. The file utt_spk_text.tsv contains a FileID, UserID and the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in Indonesia.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/36/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017 Google, Inc.
#### SLR41: High quality TTS data for Javanese.
This data set contains high-quality transcribed audio data for Javanese. The data set consists of wave files,
and a TSV file. The file line_index.tsv contains a filename and the transcription of audio in the file. Each
filename is prepended with a speaker identification number.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in collaboration with Gadjah Mada University in Indonesia.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/41/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google LLC
#### SLR42: High quality TTS data for Khmer.
This data set contains high-quality transcribed audio data for Khmer. The data set consists of wave files,
and a TSV file. The file line_index.tsv contains a filename and the transcription of audio in the file.
Each filename is prepended with a speaker identification number.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/42/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google LLC
#### SLR43: High quality TTS data for Nepali.
This data set contains high-quality transcribed audio data for Nepali. The data set consists of wave files,
and a TSV file. The file line_index.tsv contains a filename and the transcription of audio in the file.
Each filename is prepended with a speaker identification number.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in Nepal.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/43/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google LLC
#### SLR44: High quality TTS data for Sundanese.
This data set contains high-quality transcribed audio data for Sundanese. The data set consists of wave files,
and a TSV file. The file line_index.tsv contains a filename and the transcription of audio in the file.
Each filename is prepended with a speaker identification number.
The data set has been manually quality checked, but there might still be errors.
This dataset was collected by Google in collaboration with Universitas Pendidikan Indonesia.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/44/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google LLC
#### SLR52: Large Sinhala ASR training data set.
This data set contains transcribed audio data for Sinhala (~185K utterances). The data set consists of wave files,
and a TSV file. The file utt_spk_text.tsv contains a FileID, UserID and the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/52/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google, Inc.
#### SLR53: Large Bengali ASR training data set.
This data set contains transcribed audio data for Bengali (~196K utterances). The data set consists of wave files,
and a TSV file. The file utt_spk_text.tsv contains a FileID, UserID and the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/53/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google, Inc.
#### SLR54: Large Nepali ASR training data set.
This data set contains transcribed audio data for Nepali (~157K utterances). The data set consists of wave files,
and a TSV file. The file utt_spk_text.tsv contains a FileID, UserID and the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/54/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2016, 2017, 2018 Google, Inc.
#### SLR63: Crowdsourced high-quality Malayalam multi-speaker speech data set
This data set contains transcribed high-quality audio of Malayalam sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/63/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR64: Crowdsourced high-quality Marathi multi-speaker speech data set
This data set contains transcribed high-quality audio of Marathi sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/64/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR65: Crowdsourced high-quality Tamil multi-speaker speech data set
This data set contains transcribed high-quality audio of Tamil sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/65/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR66: Crowdsourced high-quality Telugu multi-speaker speech data set
This data set contains transcribed high-quality audio of Telugu sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/66/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR69: Crowdsourced high-quality Catalan multi-speaker speech data set
This data set contains transcribed high-quality audio of Catalan sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/69/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR70: Crowdsourced high-quality Nigerian English speech data set
This data set contains transcribed high-quality audio of Nigerian English sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/70/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR71: Crowdsourced high-quality Chilean Spanish speech data set
This data set contains transcribed high-quality audio of Chilean Spanish sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/71/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR72: Crowdsourced high-quality Colombian Spanish speech data set
This data set contains transcribed high-quality audio of Colombian Spanish sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/72/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR73: Crowdsourced high-quality Peruvian Spanish speech data set
This data set contains transcribed high-quality audio of Peruvian Spanish sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/73/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR74: Crowdsourced high-quality Puerto Rico Spanish speech data set
This data set contains transcribed high-quality audio of Puerto Rico Spanish sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/74/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR75: Crowdsourced high-quality Venezuelan Spanish speech data set
This data set contains transcribed high-quality audio of Venezuelan Spanish sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/75/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR76: Crowdsourced high-quality Basque speech data set
This data set contains transcribed high-quality audio of Basque sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/76/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR77: Crowdsourced high-quality Galician speech data set
This data set contains transcribed high-quality audio of Galician sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/77/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR78: Crowdsourced high-quality Gujarati multi-speaker speech data set
This data set contains transcribed high-quality audio of Gujarati sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/78/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR79: Crowdsourced high-quality Kannada multi-speaker speech data set
This data set contains transcribed high-quality audio of Kannada sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/79/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR80: Crowdsourced high-quality Burmese speech data set
This data set contains transcribed high-quality audio of Burmese sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/80/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR83: Crowdsourced high-quality UK and Ireland English Dialect speech data set
This data set contains transcribed high-quality audio of English sentences recorded by volunteers speaking different dialects of the language.
The data set consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains a line ID, an anonymized FileID and the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
The recordings from the Welsh English speakers were collected in collaboration with Cardiff University.
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/83/LICENSE) file and https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019 Google, Inc.
#### SLR86: Crowdsourced high-quality Yoruba multi-speaker speech data set
This data set contains transcribed high-quality audio of Yoruba sentences recorded by volunteers. The data set
consists of wave files, and a TSV file (line_index.tsv). The file line_index.tsv contains an anonymized FileID and
the transcription of audio in the file.
The data set has been manually quality checked, but there might still be errors.
Please report any issues in the issue tracker on GitHub: https://github.com/googlei18n/language-resources/issues
The dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License.
See [LICENSE](https://www.openslr.org/resources/86/LICENSE) file and
https://github.com/google/language-resources#license for license information.
Copyright 2018, 2019, 2020 Google, Inc.
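All of the sets above ship their transcriptions as plain tab-separated text, so they can be inspected without special tooling. The following is a minimal parsing sketch; the file paths and helper names are assumptions, while the column layouts follow the descriptions above:
```python
import csv

def read_line_index(path):
    """Parse a line_index.tsv file (TTS and crowdsourced sets):
    each row holds a filename/FileID and the transcription."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[0]: row[1] for row in csv.reader(f, delimiter="\t")}

def read_utt_spk_text(path):
    """Parse a utt_spk_text.tsv file (large ASR sets, e.g. SLR52-54):
    each row holds a FileID, a UserID and the transcription."""
    with open(path, newline="", encoding="utf-8") as f:
        return [(row[0], row[1], row[2]) for row in csv.reader(f, delimiter="\t")]
```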
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Javanese, Khmer, Nepali, Sundanese, Sinhala, Bengali, Malayalam, Marathi, Tamil, Telugu, Catalan, Nigerian English,
Chilean Spanish, Colombian Spanish, Peruvian Spanish, Puerto Rico Spanish, Venezuelan Spanish, Basque, Galician,
Gujarati, Kannada, Burmese, UK and Ireland English dialects, Yoruba, Afrikaans, Sesotho, Setswana and isiXhosa.
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file (`path`), the decoded audio data (`audio`), and the transcription of the audio (`sentence`).
#### SLR32, SLR35, SLR36, SLR41, SLR42, SLR43, SLR44, SLR52, SLR53, SLR54, SLR63, SLR64, SLR65, SLR66, SLR69, SLR70, SLR71, SLR72, SLR73, SLR74, SLR75, SLR76, SLR77, SLR78, SLR79, SLR80, SLR86
```
{
'path': '/home/cahya/.cache/huggingface/datasets/downloads/extracted/4d9cf915efc21110199074da4d492566dee6097068b07a680f670fcec9176e62/su_id_female/wavs/suf_00297_00037352660.wav',
'audio': {'path': '/home/cahya/.cache/huggingface/datasets/downloads/extracted/4d9cf915efc21110199074da4d492566dee6097068b07a680f670fcec9176e62/su_id_female/wavs/suf_00297_00037352660.wav',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'sentence': 'Panonton ting haruleng ningali Kelly Clarkson keur nyanyi di tipi',
}
```
### Data Fields
- `path`: The path to the audio file.
- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling
rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and
resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might
take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column,
*i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `sentence`: The sentence the user was prompted to speak.
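As a minimal usage sketch, assuming the configurations on this card are exposed on the Hugging Face Hub under the `openslr` dataset id:
```python
from datasets import load_dataset

# "SLR41" (Javanese TTS data) is used here as an example configuration.
ds = load_dataset("openslr", "SLR41", split="train")

# Index the sample first, then access the "audio" column, so that only
# this one file is decoded and resampled.
sample = ds[0]
print(sample["path"])                     # path to the wave file
print(sample["audio"]["sampling_rate"])   # e.g. 16000
print(sample["audio"]["array"][:10])      # decoded waveform (numpy array)
print(sample["sentence"])                 # the prompted transcription
```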
### Data Splits
There is only one "train" split for all configurations, and the number of examples per configuration is:
| Config | Number of examples |
|:-------|-------------------:|
| SLR32  |               9821 |
| SLR35  |             185076 |
| SLR36  |             219156 |
| SLR41  |               5822 |
| SLR42  |               2906 |
| SLR43  |               2064 |
| SLR44  |               4213 |
| SLR52  |             185293 |
| SLR53  |             218703 |
| SLR54  |             157905 |
| SLR63  |               4126 |
| SLR64  |               1569 |
| SLR65  |               4284 |
| SLR66  |               4448 |
| SLR69  |               4240 |
| SLR70  |               3359 |
| SLR71  |               4374 |
| SLR72  |               4903 |
| SLR73  |               5447 |
| SLR74  |                617 |
| SLR75  |               3357 |
| SLR76  |               7136 |
| SLR77  |               5587 |
| SLR78  |               4272 |
| SLR79  |               4400 |
| SLR80  |               2530 |
| SLR83  |              17877 |
| SLR86  |               3583 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voices online. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Each dataset is distributed under Creative Commons Attribution-ShareAlike 4.0 International Public License ([CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode)).
See https://github.com/google/language-resources#license or the resource page on [OpenSLR](https://openslr.org/resources.php) for more information.
### Citation Information
#### SLR32
```
@inproceedings{van-niekerk-etal-2017,
title = {{Rapid development of TTS corpora for four South African languages}},
author = {Daniel van Niekerk and Charl van Heerden and Marelie Davel and Neil Kleynhans and Oddur Kjartansson and Martin Jansche and Linne Ha},
booktitle = {Proc. Interspeech 2017},
pages = {2178--2182},
address = {Stockholm, Sweden},
month = aug,
year = {2017},
URL = {https://dx.doi.org/10.21437/Interspeech.2017-1139}
}
```
#### SLR35, SLR36, SLR52, SLR53, SLR54
```
@inproceedings{kjartansson-etal-sltu2018,
title = {{Crowd-Sourced Speech Corpora for Javanese, Sundanese, Sinhala, Nepali, and Bangladeshi Bengali}},
author = {Oddur Kjartansson and Supheakmungkol Sarin and Knot Pipatsrisawat and Martin Jansche and Linne Ha},
booktitle = {Proc. The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU)},
year = {2018},
address = {Gurugram, India},
month = aug,
pages = {52--55},
URL = {https://dx.doi.org/10.21437/SLTU.2018-11},
}
```
#### SLR41, SLR42, SLR43, SLR44
```
@inproceedings{kjartansson-etal-tts-sltu2018,
title = {{A Step-by-Step Process for Building TTS Voices Using Open Source Data and Framework for Bangla, Javanese, Khmer, Nepali, Sinhala, and Sundanese}},
author = {Keshan Sodimana and Knot Pipatsrisawat and Linne Ha and Martin Jansche and Oddur Kjartansson and Pasindu De Silva and Supheakmungkol Sarin},
booktitle = {Proc. The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU)},
year = {2018},
address = {Gurugram, India},
month = aug,
pages = {66--70},
URL = {https://dx.doi.org/10.21437/SLTU.2018-14}
}
```
#### SLR63, SLR64, SLR65, SLR66, SLR78, SLR79
```
@inproceedings{he-etal-2020-open,
title = {{Open-source Multi-speaker Speech Corpora for Building Gujarati, Kannada, Malayalam, Marathi, Tamil and Telugu Speech Synthesis Systems}},
author = {He, Fei and Chu, Shan-Hui Cathy and Kjartansson, Oddur and Rivera, Clara and Katanova, Anna and Gutkin, Alexander and Demirsahin, Isin and Johny, Cibu and Jansche, Martin and Sarin, Supheakmungkol and Pipatsrisawat, Knot},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
pages = {6494--6503},
url = {https://www.aclweb.org/anthology/2020.lrec-1.800},
ISBN = {979-10-95546-34-4},
}
```
#### SLR69, SLR76, SLR77
```
@inproceedings{kjartansson-etal-2020-open,
title = {{Open-Source High Quality Speech Datasets for Basque, Catalan and Galician}},
author = {Kjartansson, Oddur and Gutkin, Alexander and Butryna, Alena and Demirsahin, Isin and Rivera, Clara},
booktitle = {Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)},
year = {2020},
pages = {21--27},
month = may,
address = {Marseille, France},
publisher = {European Language Resources association (ELRA)},
url = {https://www.aclweb.org/anthology/2020.sltu-1.3},
ISBN = {979-10-95546-35-1},
}
```
#### SLR70, SLR71, SLR72, SLR73, SLR74, SLR75
```
@inproceedings{guevara-rukoz-etal-2020-crowdsourcing,
title = {{Crowdsourcing Latin American Spanish for Low-Resource Text-to-Speech}},
author = {Guevara-Rukoz, Adriana and Demirsahin, Isin and He, Fei and Chu, Shan-Hui Cathy and Sarin, Supheakmungkol and Pipatsrisawat, Knot and Gutkin, Alexander and Butryna, Alena and Kjartansson, Oddur},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
year = {2020},
month = may,
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
url = {https://www.aclweb.org/anthology/2020.lrec-1.801},
pages = {6504--6513},
ISBN = {979-10-95546-34-4},
}
```
#### SLR80
```
@inproceedings{oo-etal-2020-burmese,
title = {{Burmese Speech Corpus, Finite-State Text Normalization and Pronunciation Grammars with an Application to Text-to-Speech}},
author = {Oo, Yin May and Wattanavekin, Theeraphol and Li, Chenfang and De Silva, Pasindu and Sarin, Supheakmungkol and Pipatsrisawat, Knot and Jansche, Martin and Kjartansson, Oddur and Gutkin, Alexander},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference (LREC)},
month = may,
year = {2020},
pages = {6328--6339},
address = {Marseille, France},
publisher = {European Language Resources Association (ELRA)},
url = {https://www.aclweb.org/anthology/2020.lrec-1.777},
ISBN = {979-10-95546-34-4},
}
```
#### SLR86
```
@inproceedings{gutkin-et-al-yoruba2020,
title = {{Developing an Open-Source Corpus of Yoruba Speech}},
author = {Alexander Gutkin and I{\c{s}}{\i}n Demir{\c{s}}ahin and Oddur Kjartansson and Clara Rivera and K\d{\'o}lá Túb\d{\`o}sún},
booktitle = {Proceedings of Interspeech 2020},
pages = {404--408},
month = {October},
year = {2020},
address = {Shanghai, China},
publisher = {International Speech and Communication Association (ISCA)},
doi = {10.21437/Interspeech.2020-1096},
url = {https://dx.doi.org/10.21437/Interspeech.2020-1096},
}
```
### Contributions
Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset. |
openwebtext | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: OpenWebText
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: openwebtext
dataset_info:
features:
- name: text
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 39769491688
num_examples: 8013769
download_size: 12880189440
dataset_size: 39769491688
---
# Dataset Card for "openwebtext"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://skylion007.github.io/OpenWebTextCorpus/](https://skylion007.github.io/OpenWebTextCorpus/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 13.51 GB
- **Size of the generated dataset:** 41.70 GB
- **Total amount of disk used:** 55.21 GB
### Dataset Summary
An open-source replication of the WebText dataset from OpenAI, which was used to train GPT-2.
This distribution was created by Aaron Gokaslan and Vanya Cohen of Brown University.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 13.51 GB
- **Size of the generated dataset:** 41.70 GB
- **Total amount of disk used:** 55.21 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\"A magazine supplement with an image of Adolf Hitler and the title 'The Unreadable Book' is pictured in Berlin. No law bans “Mei..."
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `text`: a `string` feature.
### Data Splits
| name | train |
|------------|--------:|
| plain_text | 8013769 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
The authors started by extracting all Reddit post URLs from the Reddit submissions dataset. These links were deduplicated, filtered to exclude non-HTML content, and then shuffled randomly. The links were then distributed to several machines in parallel for download, and all web pages were extracted using the `newspaper` Python package. Non-English web pages were then filtered out using Facebook's fastText classifier.
Subsequently, near-duplicate documents were identified using locality-sensitive hashing (LSH). Documents were hashed into sets of 5-grams, and all documents with a similarity greater than 0.5 were removed. The remaining documents were tokenized, and documents with fewer than 128 tokens were removed. This left 38GB of text data (40GB using SI units) from 8,013,769 documents.
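To illustrate this deduplication step, here is a minimal pure-Python sketch of near-duplicate removal over word 5-gram sets with a Jaccard threshold of 0.5. It uses exact pairwise comparison for clarity; the actual pipeline used LSH to avoid the quadratic cost, and the helper names here are hypothetical:
```python
def five_grams(text):
    """Represent a document as its set of word 5-grams."""
    words = text.split()
    return {" ".join(words[i:i + 5]) for i in range(max(len(words) - 4, 0))}

def jaccard(a, b):
    """Jaccard similarity between two 5-gram sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def deduplicate(docs, threshold=0.5):
    """Keep each document only if it is no more than `threshold`-similar
    to any already-kept document (exact O(n^2) version of the LSH step)."""
    kept, kept_grams = [], []
    for doc in docs:
        grams = five_grams(doc)
        if all(jaccard(grams, g) <= threshold for g in kept_grams):
            kept.append(doc)
            kept_grams.append(grams)
    return kept
```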
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
The dataset doesn't contain annotations.
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
These data are released under this licensing scheme from the original authors ([source](https://skylion007.github.io/OpenWebTextCorpus/)):
```
We do not own any of the text from which these data has been extracted.
We license the actual packaging of these parallel data under the [Creative Commons CC0 license (“no rights reserved”)](https://creativecommons.org/share-your-work/public-domain/cc0/)
```
#### Notice policy
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.

And contact us at the following email addresses: openwebtext at gmail.com and datasets at huggingface.co
#### Take down policy
The original authors will comply with legitimate requests by removing the affected sources from the next release of the corpus.
Hugging Face will also update this repository accordingly.
### Citation Information
```
@misc{Gokaslan2019OpenWeb,
title={OpenWebText Corpus},
author={Aaron Gokaslan* and Vanya Cohen* and Ellie Pavlick and Stefanie Tellex},
howpublished={\url{http://Skylion007.github.io/OpenWebTextCorpus}},
year={2019}
}
```
### Contributions
Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.
|
opinosis | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: Opinosis
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: opinosis
tags:
- abstractive-summarization
dataset_info:
features:
- name: review_sents
dtype: string
- name: summaries
sequence: string
splits:
- name: train
num_bytes: 741270
num_examples: 51
download_size: 757398
dataset_size: 741270
---
# Dataset Card for "opinosis"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://kavita-ganesan.com/opinosis-opinion-dataset/
- **Repository:** https://github.com/kavgan/opinosis-summarization
- **Paper:** [Opinosis: A Graph Based Approach to Abstractive Summarization of Highly Redundant Opinions](https://aclanthology.org/C10-1039/)
- **Point of Contact:** [Kavita Ganesan](mailto:kavita@opinosis.ai)
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.74 MB
- **Total amount of disk used:** 1.50 MB
### Dataset Summary
The Opinosis Opinion Dataset consists of sentences extracted from reviews for 51 topics.
Topics and opinions are obtained from Tripadvisor, Edmunds.com and Amazon.com.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 0.74 MB
- **Total amount of disk used:** 1.50 MB
An example of 'train' looks as follows.
```
{
"review_sents": "This is a fake topic. \nThe topics have multiple sentence inputs. \n",
"summaries": ["This is a gold summary for topic 1. \nSentences in gold summaries are separated by newlines.", "This is another gold summary for topic 1. \nSentences in gold summaries are separated by newlines."]
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `review_sents`: a `string` feature.
- `summaries`: a `list` of `string` features.
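A minimal loading sketch, assuming the dataset is exposed on the Hugging Face Hub under the id `opinosis`:
```python
from datasets import load_dataset

ds = load_dataset("opinosis", split="train")

topic = ds[0]
print(topic["review_sents"][:100])   # redundant review sentences for one topic
print(len(topic["summaries"]))       # number of gold summaries for the topic
```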
### Data Splits
| name |train|
|-------|----:|
|default| 51|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The license for this dataset is Apache License 2.0 and can be found [here](https://github.com/kavgan/opinosis-summarization/blob/master/LICENSE).
### Citation Information
```
@inproceedings{ganesan2010opinosis,
title={Opinosis: a graph-based approach to abstractive summarization of highly redundant opinions},
author={Ganesan, Kavita and Zhai, ChengXiang and Han, Jiawei},
booktitle={Proceedings of the 23rd International Conference on Computational Linguistics},
pages={340--348},
year={2010},
organization={Association for Computational Linguistics}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
opus100 | ---
pretty_name: Opus100
task_categories:
- translation
multilinguality:
- translation
task_ids: []
language:
- af
- am
- an
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- dz
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- ig
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- ky
- li
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nb
- ne
- nl
- nn
- 'no'
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- rw
- se
- sh
- si
- sk
- sl
- sq
- sr
- sv
- ta
- te
- tg
- th
- tk
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- wa
- xh
- yi
- yo
- zh
- zu
annotations_creators:
- no-annotation
language_creators:
- found
source_datasets:
- extended
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- 1M<n<10M
- n<1K
license:
- unknown
paperswithcode_id: opus-100
configs:
- af-en
- am-en
- an-en
- ar-de
- ar-en
- ar-fr
- ar-nl
- ar-ru
- ar-zh
- as-en
- az-en
- be-en
- bg-en
- bn-en
- br-en
- bs-en
- ca-en
- cs-en
- cy-en
- da-en
- de-en
- de-fr
- de-nl
- de-ru
- de-zh
- dz-en
- el-en
- en-eo
- en-es
- en-et
- en-eu
- en-fa
- en-fi
- en-fr
- en-fy
- en-ga
- en-gd
- en-gl
- en-gu
- en-ha
- en-he
- en-hi
- en-hr
- en-hu
- en-hy
- en-id
- en-ig
- en-is
- en-it
- en-ja
- en-ka
- en-kk
- en-km
- en-kn
- en-ko
- en-ku
- en-ky
- en-li
- en-lt
- en-lv
- en-mg
- en-mk
- en-ml
- en-mn
- en-mr
- en-ms
- en-mt
- en-my
- en-nb
- en-ne
- en-nl
- en-nn
- en-no
- en-oc
- en-or
- en-pa
- en-pl
- en-ps
- en-pt
- en-ro
- en-ru
- en-rw
- en-se
- en-sh
- en-si
- en-sk
- en-sl
- en-sq
- en-sr
- en-sv
- en-ta
- en-te
- en-tg
- en-th
- en-tk
- en-tr
- en-tt
- en-ug
- en-uk
- en-ur
- en-uz
- en-vi
- en-wa
- en-xh
- en-yi
- en-yo
- en-zh
- en-zu
- fr-nl
- fr-ru
- fr-zh
- nl-ru
- nl-zh
- ru-zh
dataset_info:
- config_name: af-en
features:
- name: translation
dtype:
translation:
languages:
- af
- en
splits:
- name: test
num_bytes: 135916
num_examples: 2000
- name: train
num_bytes: 18726471
num_examples: 275512
- name: validation
num_bytes: 132777
num_examples: 2000
download_size: 7505036
dataset_size: 18995164
- config_name: am-en
features:
- name: translation
dtype:
translation:
languages:
- am
- en
splits:
- name: test
num_bytes: 588029
num_examples: 2000
- name: train
num_bytes: 21950644
num_examples: 89027
- name: validation
num_bytes: 566077
num_examples: 2000
download_size: 7004193
dataset_size: 23104750
- config_name: an-en
features:
- name: translation
dtype:
translation:
languages:
- an
- en
splits:
- name: train
num_bytes: 438332
num_examples: 6961
download_size: 96148
dataset_size: 438332
- config_name: ar-en
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: test
num_bytes: 331648
num_examples: 2000
- name: train
num_bytes: 152766484
num_examples: 1000000
- name: validation
num_bytes: 2272106
num_examples: 2000
download_size: 55286865
dataset_size: 155370238
- config_name: as-en
features:
- name: translation
dtype:
translation:
languages:
- as
- en
splits:
- name: test
num_bytes: 261466
num_examples: 2000
- name: train
num_bytes: 15634648
num_examples: 138479
- name: validation
num_bytes: 248139
num_examples: 2000
download_size: 4183517
dataset_size: 16144253
- config_name: az-en
features:
- name: translation
dtype:
translation:
languages:
- az
- en
splits:
- name: test
num_bytes: 393109
num_examples: 2000
- name: train
num_bytes: 56431259
num_examples: 262089
- name: validation
num_bytes: 407109
num_examples: 2000
download_size: 18897341
dataset_size: 57231477
- config_name: be-en
features:
- name: translation
dtype:
translation:
languages:
- be
- en
splits:
- name: test
num_bytes: 166858
num_examples: 2000
- name: train
num_bytes: 5298500
num_examples: 67312
- name: validation
num_bytes: 175205
num_examples: 2000
download_size: 1906088
dataset_size: 5640563
- config_name: bg-en
features:
- name: translation
dtype:
translation:
languages:
- bg
- en
splits:
- name: test
num_bytes: 243751
num_examples: 2000
- name: train
num_bytes: 108930347
num_examples: 1000000
- name: validation
num_bytes: 234848
num_examples: 2000
download_size: 36980744
dataset_size: 109408946
- config_name: bn-en
features:
- name: translation
dtype:
translation:
languages:
- bn
- en
splits:
- name: test
num_bytes: 510101
num_examples: 2000
- name: train
num_bytes: 249906846
num_examples: 1000000
- name: validation
num_bytes: 498414
num_examples: 2000
download_size: 72999655
dataset_size: 250915361
- config_name: br-en
features:
- name: translation
dtype:
translation:
languages:
- br
- en
splits:
- name: test
num_bytes: 127925
num_examples: 2000
- name: train
num_bytes: 8539006
num_examples: 153447
- name: validation
num_bytes: 133772
num_examples: 2000
download_size: 3323458
dataset_size: 8800703
- config_name: bs-en
features:
- name: translation
dtype:
translation:
languages:
- bs
- en
splits:
- name: test
num_bytes: 168622
num_examples: 2000
- name: train
num_bytes: 75082948
num_examples: 1000000
- name: validation
num_bytes: 172481
num_examples: 2000
download_size: 30746956
dataset_size: 75424051
- config_name: ca-en
features:
- name: translation
dtype:
translation:
languages:
- ca
- en
splits:
- name: test
num_bytes: 205666
num_examples: 2000
- name: train
num_bytes: 88405510
num_examples: 1000000
- name: validation
num_bytes: 212637
num_examples: 2000
download_size: 36267794
dataset_size: 88823813
- config_name: cs-en
features:
- name: translation
dtype:
translation:
languages:
- cs
- en
splits:
- name: test
num_bytes: 205274
num_examples: 2000
- name: train
num_bytes: 91897719
num_examples: 1000000
- name: validation
num_bytes: 219084
num_examples: 2000
download_size: 39673827
dataset_size: 92322077
- config_name: cy-en
features:
- name: translation
dtype:
translation:
languages:
- cy
- en
splits:
- name: test
num_bytes: 124289
num_examples: 2000
- name: train
num_bytes: 17244980
num_examples: 289521
- name: validation
num_bytes: 118856
num_examples: 2000
download_size: 6487005
dataset_size: 17488125
- config_name: da-en
features:
- name: translation
dtype:
translation:
languages:
- da
- en
splits:
- name: test
num_bytes: 298123
num_examples: 2000
- name: train
num_bytes: 126425274
num_examples: 1000000
- name: validation
num_bytes: 300624
num_examples: 2000
download_size: 50404122
dataset_size: 127024021
- config_name: de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: test
num_bytes: 330959
num_examples: 2000
- name: train
num_bytes: 152246756
num_examples: 1000000
- name: validation
num_bytes: 332350
num_examples: 2000
download_size: 67205361
dataset_size: 152910065
- config_name: dz-en
features:
- name: translation
dtype:
translation:
languages:
- dz
- en
splits:
- name: train
num_bytes: 81162
num_examples: 624
download_size: 17814
dataset_size: 81162
- config_name: el-en
features:
- name: translation
dtype:
translation:
languages:
- el
- en
splits:
- name: test
num_bytes: 302393
num_examples: 2000
- name: train
num_bytes: 127964703
num_examples: 1000000
- name: validation
num_bytes: 291234
num_examples: 2000
download_size: 43973686
dataset_size: 128558330
- config_name: en-eo
features:
- name: translation
dtype:
translation:
languages:
- en
- eo
splits:
- name: test
num_bytes: 167386
num_examples: 2000
- name: train
num_bytes: 24431953
num_examples: 337106
- name: validation
num_bytes: 168838
num_examples: 2000
download_size: 9999313
dataset_size: 24768177
- config_name: en-es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: test
num_bytes: 326270
num_examples: 2000
- name: train
num_bytes: 136643904
num_examples: 1000000
- name: validation
num_bytes: 326735
num_examples: 2000
download_size: 55534068
dataset_size: 137296909
- config_name: en-et
features:
- name: translation
dtype:
translation:
languages:
- en
- et
splits:
- name: test
num_bytes: 272171
num_examples: 2000
- name: train
num_bytes: 112299053
num_examples: 1000000
- name: validation
num_bytes: 276962
num_examples: 2000
download_size: 46235623
dataset_size: 112848186
- config_name: en-eu
features:
- name: translation
dtype:
translation:
languages:
- en
- eu
splits:
- name: test
num_bytes: 280885
num_examples: 2000
- name: train
num_bytes: 112330085
num_examples: 1000000
- name: validation
num_bytes: 281503
num_examples: 2000
download_size: 46389313
dataset_size: 112892473
- config_name: en-fa
features:
- name: translation
dtype:
translation:
languages:
- en
- fa
splits:
- name: test
num_bytes: 296556
num_examples: 2000
- name: train
num_bytes: 125401335
num_examples: 1000000
- name: validation
num_bytes: 291129
num_examples: 2000
download_size: 44568447
dataset_size: 125989020
- config_name: en-fi
features:
- name: translation
dtype:
translation:
languages:
- en
- fi
splits:
- name: test
num_bytes: 245822
num_examples: 2000
- name: train
num_bytes: 106025790
num_examples: 1000000
- name: validation
num_bytes: 247227
num_examples: 2000
download_size: 42563103
dataset_size: 106518839
- config_name: en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: test
num_bytes: 469731
num_examples: 2000
- name: train
num_bytes: 201441250
num_examples: 1000000
- name: validation
num_bytes: 481484
num_examples: 2000
download_size: 81009778
dataset_size: 202392465
- config_name: en-fy
features:
- name: translation
dtype:
translation:
languages:
- en
- fy
splits:
- name: test
num_bytes: 101246
num_examples: 2000
- name: train
num_bytes: 3895688
num_examples: 54342
- name: validation
num_bytes: 100129
num_examples: 2000
download_size: 1522187
dataset_size: 4097063
- config_name: en-ga
features:
- name: translation
dtype:
translation:
languages:
- en
- ga
splits:
- name: test
num_bytes: 503317
num_examples: 2000
- name: train
num_bytes: 42132742
num_examples: 289524
- name: validation
num_bytes: 503217
num_examples: 2000
download_size: 14998873
dataset_size: 43139276
- config_name: en-gd
features:
- name: translation
dtype:
translation:
languages:
- en
- gd
splits:
- name: test
num_bytes: 218362
num_examples: 1606
- name: train
num_bytes: 1254795
num_examples: 16316
- name: validation
num_bytes: 203885
num_examples: 1605
download_size: 564053
dataset_size: 1677042
- config_name: en-gl
features:
- name: translation
dtype:
translation:
languages:
- en
- gl
splits:
- name: test
num_bytes: 190699
num_examples: 2000
- name: train
num_bytes: 43327444
num_examples: 515344
- name: validation
num_bytes: 193606
num_examples: 2000
download_size: 18056665
dataset_size: 43711749
- config_name: en-gu
features:
- name: translation
dtype:
translation:
languages:
- en
- gu
splits:
- name: test
num_bytes: 199733
num_examples: 2000
- name: train
num_bytes: 33641975
num_examples: 318306
- name: validation
num_bytes: 205550
num_examples: 2000
download_size: 9407543
dataset_size: 34047258
- config_name: en-ha
features:
- name: translation
dtype:
translation:
languages:
- en
- ha
splits:
- name: test
num_bytes: 407352
num_examples: 2000
- name: train
num_bytes: 20391964
num_examples: 97983
- name: validation
num_bytes: 411526
num_examples: 2000
download_size: 6898482
dataset_size: 21210842
- config_name: en-he
features:
- name: translation
dtype:
translation:
languages:
- en
- he
splits:
- name: test
num_bytes: 208475
num_examples: 2000
- name: train
num_bytes: 91160431
num_examples: 1000000
- name: validation
num_bytes: 209446
num_examples: 2000
download_size: 31214136
dataset_size: 91578352
- config_name: en-hi
features:
- name: translation
dtype:
translation:
languages:
- en
- hi
splits:
- name: test
num_bytes: 496578
num_examples: 2000
- name: train
num_bytes: 124923977
num_examples: 534319
- name: validation
num_bytes: 474087
num_examples: 2000
download_size: 35993452
dataset_size: 125894642
- config_name: en-hr
features:
- name: translation
dtype:
translation:
languages:
- en
- hr
splits:
- name: test
num_bytes: 179644
num_examples: 2000
- name: train
num_bytes: 75310316
num_examples: 1000000
- name: validation
num_bytes: 179623
num_examples: 2000
download_size: 30728154
dataset_size: 75669583
- config_name: en-hu
features:
- name: translation
dtype:
translation:
languages:
- en
- hu
splits:
- name: test
num_bytes: 206047
num_examples: 2000
- name: train
num_bytes: 87484262
num_examples: 1000000
- name: validation
num_bytes: 208315
num_examples: 2000
download_size: 35696235
dataset_size: 87898624
- config_name: en-hy
features:
- name: translation
dtype:
translation:
languages:
- en
- hy
splits:
- name: train
num_bytes: 652631
num_examples: 7059
download_size: 215246
dataset_size: 652631
- config_name: en-id
features:
- name: translation
dtype:
translation:
languages:
- en
- id
splits:
- name: test
num_bytes: 177693
num_examples: 2000
- name: train
num_bytes: 78699773
num_examples: 1000000
- name: validation
num_bytes: 180032
num_examples: 2000
download_size: 29914089
dataset_size: 79057498
- config_name: en-ig
features:
- name: translation
dtype:
translation:
languages:
- en
- ig
splits:
- name: test
num_bytes: 137332
num_examples: 1843
- name: train
num_bytes: 1612539
num_examples: 18415
- name: validation
num_bytes: 135995
num_examples: 1843
download_size: 391849
dataset_size: 1885866
- config_name: en-is
features:
- name: translation
dtype:
translation:
languages:
- en
- is
splits:
- name: test
num_bytes: 170887
num_examples: 2000
- name: train
num_bytes: 73964915
num_examples: 1000000
- name: validation
num_bytes: 170640
num_examples: 2000
download_size: 28831218
dataset_size: 74306442
- config_name: en-it
features:
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: test
num_bytes: 299037
num_examples: 2000
- name: train
num_bytes: 123655086
num_examples: 1000000
- name: validation
num_bytes: 294362
num_examples: 2000
download_size: 50903618
dataset_size: 124248485
- config_name: en-ja
features:
- name: translation
dtype:
translation:
languages:
- en
- ja
splits:
- name: test
num_bytes: 190999
num_examples: 2000
- name: train
num_bytes: 88349369
num_examples: 1000000
- name: validation
num_bytes: 191419
num_examples: 2000
download_size: 34452575
dataset_size: 88731787
- config_name: en-ka
features:
- name: translation
dtype:
translation:
languages:
- en
- ka
splits:
- name: test
num_bytes: 256227
num_examples: 2000
- name: train
num_bytes: 42465706
num_examples: 377306
- name: validation
num_bytes: 260416
num_examples: 2000
download_size: 12743188
dataset_size: 42982349
- config_name: en-kk
features:
- name: translation
dtype:
translation:
languages:
- en
- kk
splits:
- name: test
num_bytes: 137664
num_examples: 2000
- name: train
num_bytes: 7124378
num_examples: 79927
- name: validation
num_bytes: 139665
num_examples: 2000
download_size: 2425372
dataset_size: 7401707
- config_name: en-km
features:
- name: translation
dtype:
translation:
languages:
- en
- km
splits:
- name: test
num_bytes: 289027
num_examples: 2000
- name: train
num_bytes: 19680611
num_examples: 111483
- name: validation
num_bytes: 302527
num_examples: 2000
download_size: 5193620
dataset_size: 20272165
- config_name: en-ko
features:
- name: translation
dtype:
translation:
languages:
- en
- ko
splits:
- name: test
num_bytes: 190696
num_examples: 2000
- name: train
num_bytes: 93665332
num_examples: 1000000
- name: validation
num_bytes: 189368
num_examples: 2000
download_size: 37602794
dataset_size: 94045396
- config_name: en-kn
features:
- name: translation
dtype:
translation:
languages:
- en
- kn
splits:
- name: test
num_bytes: 77205
num_examples: 918
- name: train
num_bytes: 1833334
num_examples: 14537
- name: validation
num_bytes: 77607
num_examples: 917
download_size: 525449
dataset_size: 1988146
- config_name: en-ku
features:
- name: translation
dtype:
translation:
languages:
- en
- ku
splits:
- name: test
num_bytes: 247847
num_examples: 2000
- name: train
num_bytes: 49107864
num_examples: 144844
- name: validation
num_bytes: 239325
num_examples: 2000
download_size: 14252198
dataset_size: 49595036
- config_name: en-ky
features:
- name: translation
dtype:
translation:
languages:
- en
- ky
splits:
- name: test
num_bytes: 142530
num_examples: 2000
- name: train
num_bytes: 1879298
num_examples: 27215
- name: validation
num_bytes: 138487
num_examples: 2000
download_size: 616902
dataset_size: 2160315
- config_name: en-li
features:
- name: translation
dtype:
translation:
languages:
- en
- li
splits:
- name: test
num_bytes: 93350
num_examples: 2000
- name: train
num_bytes: 1628601
num_examples: 25535
- name: validation
num_bytes: 92906
num_examples: 2000
download_size: 450092
dataset_size: 1814857
- config_name: en-lt
features:
- name: translation
dtype:
translation:
languages:
- en
- lt
splits:
- name: test
num_bytes: 482615
num_examples: 2000
- name: train
num_bytes: 177061044
num_examples: 1000000
- name: validation
num_bytes: 469117
num_examples: 2000
download_size: 69388131
dataset_size: 178012776
- config_name: en-lv
features:
- name: translation
dtype:
translation:
languages:
- en
- lv
splits:
- name: test
num_bytes: 536576
num_examples: 2000
- name: train
num_bytes: 206051849
num_examples: 1000000
- name: validation
num_bytes: 522072
num_examples: 2000
download_size: 78952903
dataset_size: 207110497
- config_name: en-mg
features:
- name: translation
dtype:
translation:
languages:
- en
- mg
splits:
- name: test
num_bytes: 525067
num_examples: 2000
- name: train
num_bytes: 130865649
num_examples: 590771
- name: validation
num_bytes: 511171
num_examples: 2000
download_size: 52470504
dataset_size: 131901887
- config_name: en-mk
features:
- name: translation
dtype:
translation:
languages:
- en
- mk
splits:
- name: test
num_bytes: 308934
num_examples: 2000
- name: train
num_bytes: 117069489
num_examples: 1000000
- name: validation
num_bytes: 305498
num_examples: 2000
download_size: 39517761
dataset_size: 117683921
- config_name: en-ml
features:
- name: translation
dtype:
translation:
languages:
- en
- ml
splits:
- name: test
num_bytes: 340626
num_examples: 2000
- name: train
num_bytes: 199971743
num_examples: 822746
- name: validation
num_bytes: 334459
num_examples: 2000
download_size: 48654808
dataset_size: 200646828
- config_name: en-mn
features:
- name: translation
dtype:
translation:
languages:
- en
- mn
splits:
- name: train
num_bytes: 250778
num_examples: 4294
download_size: 42039
dataset_size: 250778
- config_name: en-mr
features:
- name: translation
dtype:
translation:
languages:
- en
- mr
splits:
- name: test
num_bytes: 238612
num_examples: 2000
- name: train
num_bytes: 2724131
num_examples: 27007
- name: validation
num_bytes: 235540
num_examples: 2000
download_size: 910211
dataset_size: 3198283
- config_name: en-ms
features:
- name: translation
dtype:
translation:
languages:
- en
- ms
splits:
- name: test
num_bytes: 179705
num_examples: 2000
- name: train
num_bytes: 76829645
num_examples: 1000000
- name: validation
num_bytes: 180183
num_examples: 2000
download_size: 29807607
dataset_size: 77189533
- config_name: en-mt
features:
- name: translation
dtype:
translation:
languages:
- en
- mt
splits:
- name: test
num_bytes: 566134
num_examples: 2000
- name: train
num_bytes: 222222396
num_examples: 1000000
- name: validation
num_bytes: 594386
num_examples: 2000
download_size: 84757608
dataset_size: 223382916
- config_name: en-my
features:
- name: translation
dtype:
translation:
languages:
- en
- my
splits:
- name: test
num_bytes: 337351
num_examples: 2000
- name: train
num_bytes: 3673501
num_examples: 24594
- name: validation
num_bytes: 336155
num_examples: 2000
download_size: 1038600
dataset_size: 4347007
- config_name: en-nb
features:
- name: translation
dtype:
translation:
languages:
- en
- nb
splits:
- name: test
num_bytes: 334117
num_examples: 2000
- name: train
num_bytes: 13611709
num_examples: 142906
- name: validation
num_bytes: 324400
num_examples: 2000
download_size: 5706626
dataset_size: 14270226
- config_name: en-ne
features:
- name: translation
dtype:
translation:
languages:
- en
- ne
splits:
- name: test
num_bytes: 186527
num_examples: 2000
- name: train
num_bytes: 44136280
num_examples: 406381
- name: validation
num_bytes: 204920
num_examples: 2000
download_size: 11711988
dataset_size: 44527727
- config_name: en-nl
features:
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: test
num_bytes: 282755
num_examples: 2000
- name: train
num_bytes: 112327073
num_examples: 1000000
- name: validation
num_bytes: 270940
num_examples: 2000
download_size: 45374708
dataset_size: 112880768
- config_name: en-nn
features:
- name: translation
dtype:
translation:
languages:
- en
- nn
splits:
- name: test
num_bytes: 179007
num_examples: 2000
- name: train
num_bytes: 32924821
num_examples: 486055
- name: validation
num_bytes: 187650
num_examples: 2000
download_size: 12742134
dataset_size: 33291478
- config_name: en-no
features:
- name: translation
dtype:
translation:
languages:
- en
- 'no'
splits:
- name: test
num_bytes: 173328
num_examples: 2000
- name: train
num_bytes: 74106283
num_examples: 1000000
- name: validation
num_bytes: 178013
num_examples: 2000
download_size: 28851262
dataset_size: 74457624
- config_name: en-oc
features:
- name: translation
dtype:
translation:
languages:
- en
- oc
splits:
- name: test
num_bytes: 82350
num_examples: 2000
- name: train
num_bytes: 1627206
num_examples: 35791
- name: validation
num_bytes: 81650
num_examples: 2000
download_size: 607192
dataset_size: 1791206
- config_name: en-or
features:
- name: translation
dtype:
translation:
languages:
- en
- or
splits:
- name: test
num_bytes: 163947
num_examples: 1318
- name: train
num_bytes: 1500749
num_examples: 14273
- name: validation
num_bytes: 155331
num_examples: 1317
download_size: 499401
dataset_size: 1820027
- config_name: en-pa
features:
- name: translation
dtype:
translation:
languages:
- en
- pa
splits:
- name: test
num_bytes: 133909
num_examples: 2000
- name: train
num_bytes: 8509228
num_examples: 107296
- name: validation
num_bytes: 136196
num_examples: 2000
download_size: 2589682
dataset_size: 8779333
- config_name: en-pl
features:
- name: translation
dtype:
translation:
languages:
- en
- pl
splits:
- name: test
num_bytes: 212503
num_examples: 2000
- name: train
num_bytes: 95248523
num_examples: 1000000
- name: validation
num_bytes: 218216
num_examples: 2000
download_size: 39320454
dataset_size: 95679242
- config_name: en-ps
features:
- name: translation
dtype:
translation:
languages:
- en
- ps
splits:
- name: test
num_bytes: 93003
num_examples: 2000
- name: train
num_bytes: 4436576
num_examples: 79127
- name: validation
num_bytes: 95164
num_examples: 2000
download_size: 1223087
dataset_size: 4624743
- config_name: en-pt
features:
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: test
num_bytes: 296122
num_examples: 2000
- name: train
num_bytes: 118243649
num_examples: 1000000
- name: validation
num_bytes: 292082
num_examples: 2000
download_size: 48087550
dataset_size: 118831853
- config_name: en-ro
features:
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: test
num_bytes: 198647
num_examples: 2000
- name: train
num_bytes: 85249851
num_examples: 1000000
- name: validation
num_bytes: 199172
num_examples: 2000
download_size: 35032743
dataset_size: 85647670
- config_name: en-ru
features:
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: test
num_bytes: 490984
num_examples: 2000
- name: train
num_bytes: 195101737
num_examples: 1000000
- name: validation
num_bytes: 490246
num_examples: 2000
download_size: 68501634
dataset_size: 196082967
- config_name: en-rw
features:
- name: translation
dtype:
translation:
languages:
- en
- rw
splits:
- name: test
num_bytes: 136197
num_examples: 2000
- name: train
num_bytes: 15286303
num_examples: 173823
- name: validation
num_bytes: 134965
num_examples: 2000
download_size: 5233241
dataset_size: 15557465
- config_name: en-se
features:
- name: translation
dtype:
translation:
languages:
- en
- se
splits:
- name: test
num_bytes: 85705
num_examples: 2000
- name: train
num_bytes: 2047412
num_examples: 35907
- name: validation
num_bytes: 83672
num_examples: 2000
download_size: 806982
dataset_size: 2216789
- config_name: en-sh
features:
- name: translation
dtype:
translation:
languages:
- en
- sh
splits:
- name: test
num_bytes: 569487
num_examples: 2000
- name: train
num_bytes: 60900239
num_examples: 267211
- name: validation
num_bytes: 555602
num_examples: 2000
download_size: 22357505
dataset_size: 62025328
- config_name: en-si
features:
- name: translation
dtype:
translation:
languages:
- en
- si
splits:
- name: test
num_bytes: 271743
num_examples: 2000
- name: train
num_bytes: 114951675
num_examples: 979109
- name: validation
num_bytes: 271244
num_examples: 2000
download_size: 33247484
dataset_size: 115494662
- config_name: en-sk
features:
- name: translation
dtype:
translation:
languages:
- en
- sk
splits:
- name: test
num_bytes: 258042
num_examples: 2000
- name: train
num_bytes: 111743868
num_examples: 1000000
- name: validation
num_bytes: 255470
num_examples: 2000
download_size: 46618395
dataset_size: 112257380
- config_name: en-sl
features:
- name: translation
dtype:
translation:
languages:
- en
- sl
splits:
- name: test
num_bytes: 205478
num_examples: 2000
- name: train
num_bytes: 90270957
num_examples: 1000000
- name: validation
num_bytes: 198662
num_examples: 2000
download_size: 37536724
dataset_size: 90675097
- config_name: en-sq
features:
- name: translation
dtype:
translation:
languages:
- en
- sq
splits:
- name: test
num_bytes: 275379
num_examples: 2000
- name: train
num_bytes: 105745981
num_examples: 1000000
- name: validation
num_bytes: 267312
num_examples: 2000
download_size: 42697338
dataset_size: 106288672
- config_name: en-sr
features:
- name: translation
dtype:
translation:
languages:
- en
- sr
splits:
- name: test
num_bytes: 180232
num_examples: 2000
- name: train
num_bytes: 75726835
num_examples: 1000000
- name: validation
num_bytes: 184246
num_examples: 2000
download_size: 31260575
dataset_size: 76091313
- config_name: en-sv
features:
- name: translation
dtype:
translation:
languages:
- en
- sv
splits:
- name: test
num_bytes: 271014
num_examples: 2000
- name: train
num_bytes: 116985953
num_examples: 1000000
- name: validation
num_bytes: 279994
num_examples: 2000
download_size: 46694960
dataset_size: 117536961
- config_name: en-ta
features:
- name: translation
dtype:
translation:
languages:
- en
- ta
splits:
- name: test
num_bytes: 351990
num_examples: 2000
- name: train
num_bytes: 74044524
num_examples: 227014
- name: validation
num_bytes: 335557
num_examples: 2000
download_size: 17652443
dataset_size: 74732071
- config_name: en-te
features:
- name: translation
dtype:
translation:
languages:
- en
- te
splits:
- name: test
num_bytes: 190595
num_examples: 2000
- name: train
num_bytes: 6688625
num_examples: 64352
- name: validation
num_bytes: 193666
num_examples: 2000
download_size: 2011832
dataset_size: 7072886
- config_name: en-tg
features:
- name: translation
dtype:
translation:
languages:
- en
- tg
splits:
- name: test
num_bytes: 372120
num_examples: 2000
- name: train
num_bytes: 35477177
num_examples: 193882
- name: validation
num_bytes: 371728
num_examples: 2000
download_size: 11389877
dataset_size: 36221025
- config_name: en-th
features:
- name: translation
dtype:
translation:
languages:
- en
- th
splits:
- name: test
num_bytes: 290581
num_examples: 2000
- name: train
num_bytes: 132821031
num_examples: 1000000
- name: validation
num_bytes: 288366
num_examples: 2000
download_size: 38147204
dataset_size: 133399978
- config_name: en-tk
features:
- name: translation
dtype:
translation:
languages:
- en
- tk
splits:
- name: test
num_bytes: 83886
num_examples: 1852
- name: train
num_bytes: 719633
num_examples: 13110
- name: validation
num_bytes: 81014
num_examples: 1852
download_size: 157481
dataset_size: 884533
- config_name: en-tr
features:
- name: translation
dtype:
translation:
languages:
- en
- tr
splits:
- name: test
num_bytes: 183833
num_examples: 2000
- name: train
num_bytes: 78946365
num_examples: 1000000
- name: validation
num_bytes: 181917
num_examples: 2000
download_size: 30892429
dataset_size: 79312115
- config_name: en-tt
features:
- name: translation
dtype:
translation:
languages:
- en
- tt
splits:
- name: test
num_bytes: 693276
num_examples: 2000
- name: train
num_bytes: 35313258
num_examples: 100843
- name: validation
num_bytes: 701670
num_examples: 2000
download_size: 9940523
dataset_size: 36708204
- config_name: en-ug
features:
- name: translation
dtype:
translation:
languages:
- en
- ug
splits:
- name: test
num_bytes: 620881
num_examples: 2000
- name: train
num_bytes: 31576580
num_examples: 72170
- name: validation
num_bytes: 631236
num_examples: 2000
download_size: 8687743
dataset_size: 32828697
- config_name: en-uk
features:
- name: translation
dtype:
translation:
languages:
- en
- uk
splits:
- name: test
num_bytes: 249750
num_examples: 2000
- name: train
num_bytes: 104230356
num_examples: 1000000
- name: validation
num_bytes: 247131
num_examples: 2000
download_size: 37415496
dataset_size: 104727237
- config_name: en-ur
features:
- name: translation
dtype:
translation:
languages:
- en
- ur
splits:
- name: test
num_bytes: 538564
num_examples: 2000
- name: train
num_bytes: 268961304
num_examples: 753913
- name: validation
num_bytes: 529316
num_examples: 2000
download_size: 81092186
dataset_size: 270029184
- config_name: en-uz
features:
- name: translation
dtype:
translation:
languages:
- en
- uz
splits:
- name: test
num_bytes: 408683
num_examples: 2000
- name: train
num_bytes: 38375434
num_examples: 173157
- name: validation
num_bytes: 398861
num_examples: 2000
download_size: 11791643
dataset_size: 39182978
- config_name: en-vi
features:
- name: translation
dtype:
translation:
languages:
- en
- vi
splits:
- name: test
num_bytes: 192752
num_examples: 2000
- name: train
num_bytes: 82615270
num_examples: 1000000
- name: validation
num_bytes: 194729
num_examples: 2000
download_size: 30647296
dataset_size: 83002751
- config_name: en-wa
features:
- name: translation
dtype:
translation:
languages:
- en
- wa
splits:
- name: test
num_bytes: 87099
num_examples: 2000
- name: train
num_bytes: 6085948
num_examples: 104496
- name: validation
num_bytes: 87726
num_examples: 2000
download_size: 2119821
dataset_size: 6260773
- config_name: en-xh
features:
- name: translation
dtype:
translation:
languages:
- en
- xh
splits:
- name: test
num_bytes: 318660
num_examples: 2000
- name: train
num_bytes: 50607248
num_examples: 439671
- name: validation
num_bytes: 315839
num_examples: 2000
download_size: 20503199
dataset_size: 51241747
- config_name: en-yi
features:
- name: translation
dtype:
translation:
languages:
- en
- yi
splits:
- name: test
num_bytes: 96490
num_examples: 2000
- name: train
num_bytes: 1275143
num_examples: 15010
- name: validation
num_bytes: 99826
num_examples: 2000
download_size: 284031
dataset_size: 1471459
- config_name: en-yo
features:
- name: translation
dtype:
translation:
languages:
- en
- yo
splits:
- name: train
num_bytes: 979769
num_examples: 10375
download_size: 177540
dataset_size: 979769
- config_name: en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: test
num_bytes: 511372
num_examples: 2000
- name: train
num_bytes: 200062983
num_examples: 1000000
- name: validation
num_bytes: 512364
num_examples: 2000
download_size: 83265500
dataset_size: 201086719
- config_name: en-zu
features:
- name: translation
dtype:
translation:
languages:
- en
- zu
splits:
- name: test
num_bytes: 117518
num_examples: 2000
- name: train
num_bytes: 2799590
num_examples: 38616
- name: validation
num_bytes: 120141
num_examples: 2000
download_size: 889951
dataset_size: 3037249
- config_name: ar-de
features:
- name: translation
dtype:
translation:
languages:
- ar
- de
splits:
- name: test
num_bytes: 238599
num_examples: 2000
download_size: 2556791
dataset_size: 238599
- config_name: ar-fr
features:
- name: translation
dtype:
translation:
languages:
- ar
- fr
splits:
- name: test
num_bytes: 547382
num_examples: 2000
download_size: 2556791
dataset_size: 547382
- config_name: ar-nl
features:
- name: translation
dtype:
translation:
languages:
- ar
- nl
splits:
- name: test
num_bytes: 212936
num_examples: 2000
download_size: 2556791
dataset_size: 212936
- config_name: ar-ru
features:
- name: translation
dtype:
translation:
languages:
- ar
- ru
splits:
- name: test
num_bytes: 808270
num_examples: 2000
download_size: 2556791
dataset_size: 808270
- config_name: ar-zh
features:
- name: translation
dtype:
translation:
languages:
- ar
- zh
splits:
- name: test
num_bytes: 713412
num_examples: 2000
download_size: 2556791
dataset_size: 713412
- config_name: de-fr
features:
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: test
num_bytes: 458746
num_examples: 2000
download_size: 2556791
dataset_size: 458746
- config_name: de-nl
features:
- name: translation
dtype:
translation:
languages:
- de
- nl
splits:
- name: test
num_bytes: 403886
num_examples: 2000
download_size: 2556791
dataset_size: 403886
- config_name: de-ru
features:
- name: translation
dtype:
translation:
languages:
- de
- ru
splits:
- name: test
num_bytes: 315779
num_examples: 2000
download_size: 2556791
dataset_size: 315779
- config_name: de-zh
features:
- name: translation
dtype:
translation:
languages:
- de
- zh
splits:
- name: test
num_bytes: 280397
num_examples: 2000
download_size: 2556791
dataset_size: 280397
- config_name: fr-nl
features:
- name: translation
dtype:
translation:
languages:
- fr
- nl
splits:
- name: test
num_bytes: 368646
num_examples: 2000
download_size: 2556791
dataset_size: 368646
- config_name: fr-ru
features:
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: test
num_bytes: 732724
num_examples: 2000
download_size: 2556791
dataset_size: 732724
- config_name: fr-zh
features:
- name: translation
dtype:
translation:
languages:
- fr
- zh
splits:
- name: test
num_bytes: 619394
num_examples: 2000
download_size: 2556791
dataset_size: 619394
- config_name: nl-ru
features:
- name: translation
dtype:
translation:
languages:
- nl
- ru
splits:
- name: test
num_bytes: 256067
num_examples: 2000
download_size: 2556791
dataset_size: 256067
- config_name: nl-zh
features:
- name: translation
dtype:
translation:
languages:
- nl
- zh
splits:
- name: test
num_bytes: 183641
num_examples: 2000
download_size: 2556791
dataset_size: 183641
- config_name: ru-zh
features:
- name: translation
dtype:
translation:
languages:
- ru
- zh
splits:
- name: test
num_bytes: 916114
num_examples: 2000
download_size: 2556791
dataset_size: 916114
---
# Dataset Card for Opus100
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Link](http://opus.nlpl.eu/opus-100.php)
- **Repository:** [GitHub](https://github.com/EdinburghNLP/opus-100-corpus)
- **Paper:** [arXiv](https://arxiv.org/abs/2004.11867)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
OPUS-100 is English-centric, meaning that all training pairs include English on either the source or target side. The corpus covers 100 languages (including English); the languages were selected based on the volume of parallel data available in OPUS.
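For reference, a single configuration can be loaded with the `datasets` library — a minimal sketch, assuming the dataset id `opus100` and using `en-nl`, one of the configurations declared above:
```python
from datasets import load_dataset

# Any configuration declared in the metadata works the same way.
dataset = load_dataset("opus100", "en-nl")
print(dataset["train"][0]["translation"])
```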
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
OPUS-100 contains approximately 55M sentence pairs. Of the 99 language pairs, 44 have 1M sentence pairs of training data, 73 have at least 100k, and 95 have at least 10k.
## Dataset Structure
### Data Instances
```
{
  "translation": {
    "ca": "El departament de bombers té el seu propi equip d'investigació.",
    "en": "Well, the fire department has its own investigative unit."
  }
}
```
### Data Fields
- `translation` (`dict`): parallel text keyed by the two language codes of the configuration — the source sentence and its translation.
### Data Splits
The dataset is split into training, development, and test portions. Data was prepared by randomly sampling up to 1M sentence pairs per language pair for training and up to 2,000 each for development and test. To ensure that there was no overlap (at the monolingual sentence level) between the training and development/test data, a filter was applied during sampling to exclude sentences that had already been sampled. Note that this was done cross-lingually, so that, for instance, an English sentence in the Portuguese-English portion of the training data could not occur in the Hindi-English test set.
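The overlap filter can be sketched as follows — an illustration only, not the authors' released preprocessing code, with all names (`candidate_pairs`, `held_out_sentences`) hypothetical:
```python
import random

def sample_train_pairs(candidate_pairs, held_out_sentences, k=1_000_000):
    """Sample up to k training pairs, skipping any pair in which either
    side already occurs (in any language pair) in the dev/test pool."""
    pool = list(candidate_pairs)
    random.shuffle(pool)
    sampled = []
    for src, tgt in pool:
        if src in held_out_sentences or tgt in held_out_sentences:
            continue  # monolingual-sentence-level, cross-lingual exclusion
        sampled.append((src, tgt))
        if len(sampled) == k:
            break
    return sampled
```
Because `held_out_sentences` pools sentences from every language pair, an English sentence sampled into one pair's test set is excluded from every pair's training data.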
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{zhang2020improving,
title={Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation},
author={Biao Zhang and Philip Williams and Ivan Titov and Rico Sennrich},
year={2020},
eprint={2004.11867},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset. |
opus_books | ---
annotations_creators:
- found
language_creators:
- found
language:
- ca
- de
- el
- en
- eo
- es
- fi
- fr
- hu
- it
- nl
- 'no'
- pl
- pt
- ru
- sv
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusBooks
dataset_info:
- config_name: ca-de
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ca
- de
splits:
- name: train
num_bytes: 899565
num_examples: 4445
download_size: 349126
dataset_size: 899565
- config_name: ca-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ca
- en
splits:
- name: train
num_bytes: 863174
num_examples: 4605
download_size: 336276
dataset_size: 863174
- config_name: de-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 13739047
num_examples: 51467
download_size: 5124458
dataset_size: 13739047
- config_name: el-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- en
splits:
- name: train
num_bytes: 552579
num_examples: 1285
download_size: 175537
dataset_size: 552579
- config_name: de-eo
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- eo
splits:
- name: train
num_bytes: 398885
num_examples: 1363
download_size: 150822
dataset_size: 398885
- config_name: en-eo
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- eo
splits:
- name: train
num_bytes: 386231
num_examples: 1562
download_size: 145339
dataset_size: 386231
- config_name: de-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- es
splits:
- name: train
num_bytes: 7592487
num_examples: 27526
download_size: 2802010
dataset_size: 7592487
- config_name: el-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- es
splits:
- name: train
num_bytes: 527991
num_examples: 1096
download_size: 168306
dataset_size: 527991
- config_name: en-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 25291783
num_examples: 93470
download_size: 9257150
dataset_size: 25291783
- config_name: eo-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- eo
- es
splits:
- name: train
num_bytes: 409591
num_examples: 1677
download_size: 154950
dataset_size: 409591
- config_name: en-fi
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fi
splits:
- name: train
num_bytes: 715039
num_examples: 3645
download_size: 266714
dataset_size: 715039
- config_name: es-fi
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- fi
splits:
- name: train
num_bytes: 710462
num_examples: 3344
download_size: 264316
dataset_size: 710462
- config_name: de-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 9544399
num_examples: 34916
download_size: 3556168
dataset_size: 9544399
- config_name: el-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- fr
splits:
- name: train
num_bytes: 539933
num_examples: 1237
download_size: 169241
dataset_size: 539933
- config_name: en-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 32997199
num_examples: 127085
download_size: 12009501
dataset_size: 32997199
- config_name: eo-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- eo
- fr
splits:
- name: train
num_bytes: 412999
num_examples: 1588
download_size: 152040
dataset_size: 412999
- config_name: es-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 14382198
num_examples: 56319
download_size: 5203099
dataset_size: 14382198
- config_name: fi-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fi
- fr
splits:
- name: train
num_bytes: 746097
num_examples: 3537
download_size: 276633
dataset_size: 746097
- config_name: ca-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ca
- hu
splits:
- name: train
num_bytes: 886162
num_examples: 4463
download_size: 346425
dataset_size: 886162
- config_name: de-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- hu
splits:
- name: train
num_bytes: 13515043
num_examples: 51780
download_size: 5069455
dataset_size: 13515043
- config_name: el-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- hu
splits:
- name: train
num_bytes: 546290
num_examples: 1090
download_size: 176715
dataset_size: 546290
- config_name: en-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- hu
splits:
- name: train
num_bytes: 35256934
num_examples: 137151
download_size: 13232578
dataset_size: 35256934
- config_name: eo-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- eo
- hu
splits:
- name: train
num_bytes: 389112
num_examples: 1636
download_size: 151332
dataset_size: 389112
- config_name: fr-hu
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- hu
splits:
- name: train
num_bytes: 22483133
num_examples: 89337
download_size: 8328639
dataset_size: 22483133
- config_name: de-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- it
splits:
- name: train
num_bytes: 7760020
num_examples: 27381
download_size: 2811066
dataset_size: 7760020
- config_name: en-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 8993803
num_examples: 32332
download_size: 3295251
dataset_size: 8993803
- config_name: eo-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- eo
- it
splits:
- name: train
num_bytes: 387606
num_examples: 1453
download_size: 146899
dataset_size: 387606
- config_name: es-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- it
splits:
- name: train
num_bytes: 7837703
num_examples: 28868
download_size: 2864028
dataset_size: 7837703
- config_name: fr-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- it
splits:
- name: train
num_bytes: 4752171
num_examples: 14692
download_size: 1737670
dataset_size: 4752171
- config_name: hu-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hu
- it
splits:
- name: train
num_bytes: 8445585
num_examples: 30949
download_size: 3101681
dataset_size: 8445585
- config_name: ca-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ca
- nl
splits:
- name: train
num_bytes: 884823
num_examples: 4329
download_size: 340308
dataset_size: 884823
- config_name: de-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- nl
splits:
- name: train
num_bytes: 3561764
num_examples: 15622
download_size: 1325189
dataset_size: 3561764
- config_name: en-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 10278038
num_examples: 38652
download_size: 3727995
dataset_size: 10278038
- config_name: es-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- nl
splits:
- name: train
num_bytes: 9062389
num_examples: 32247
download_size: 3245558
dataset_size: 9062389
- config_name: fr-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- nl
splits:
- name: train
num_bytes: 10408148
num_examples: 40017
download_size: 3720151
dataset_size: 10408148
- config_name: hu-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hu
- nl
splits:
- name: train
num_bytes: 10814173
num_examples: 43428
download_size: 3998988
dataset_size: 10814173
- config_name: it-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- nl
splits:
- name: train
num_bytes: 1328305
num_examples: 2359
download_size: 476875
dataset_size: 1328305
- config_name: en-no
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- 'no'
splits:
- name: train
num_bytes: 661978
num_examples: 3499
download_size: 246977
dataset_size: 661978
- config_name: es-no
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- 'no'
splits:
- name: train
num_bytes: 729125
num_examples: 3585
download_size: 270796
dataset_size: 729125
- config_name: fi-no
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fi
- 'no'
splits:
- name: train
num_bytes: 691181
num_examples: 3414
download_size: 256267
dataset_size: 691181
- config_name: fr-no
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- 'no'
splits:
- name: train
num_bytes: 692786
num_examples: 3449
download_size: 256501
dataset_size: 692786
- config_name: hu-no
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hu
- 'no'
splits:
- name: train
num_bytes: 695497
num_examples: 3410
download_size: 267047
dataset_size: 695497
- config_name: en-pl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- pl
splits:
- name: train
num_bytes: 583091
num_examples: 2831
download_size: 226855
dataset_size: 583091
- config_name: fi-pl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fi
- pl
splits:
- name: train
num_bytes: 613791
num_examples: 2814
download_size: 236123
dataset_size: 613791
- config_name: fr-pl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- pl
splits:
- name: train
num_bytes: 614248
num_examples: 2825
download_size: 235905
dataset_size: 614248
- config_name: hu-pl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hu
- pl
splits:
- name: train
num_bytes: 616161
num_examples: 2859
download_size: 245670
dataset_size: 616161
- config_name: de-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- pt
splits:
- name: train
num_bytes: 317155
num_examples: 1102
download_size: 116319
dataset_size: 317155
- config_name: en-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: train
num_bytes: 309689
num_examples: 1404
download_size: 111837
dataset_size: 309689
- config_name: eo-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- eo
- pt
splits:
- name: train
num_bytes: 311079
num_examples: 1259
download_size: 116157
dataset_size: 311079
- config_name: es-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- pt
splits:
- name: train
num_bytes: 326884
num_examples: 1327
download_size: 120549
dataset_size: 326884
- config_name: fr-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- pt
splits:
- name: train
num_bytes: 324616
num_examples: 1263
download_size: 115920
dataset_size: 324616
- config_name: hu-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hu
- pt
splits:
- name: train
num_bytes: 302972
num_examples: 1184
download_size: 115002
dataset_size: 302972
- config_name: it-pt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- pt
splits:
- name: train
num_bytes: 301428
num_examples: 1163
download_size: 111050
dataset_size: 301428
- config_name: de-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- ru
splits:
- name: train
num_bytes: 5764673
num_examples: 17373
download_size: 1799371
dataset_size: 5764673
- config_name: en-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 5190880
num_examples: 17496
download_size: 1613419
dataset_size: 5190880
- config_name: es-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- ru
splits:
- name: train
num_bytes: 5281130
num_examples: 16793
download_size: 1648606
dataset_size: 5281130
- config_name: fr-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 2474210
num_examples: 8197
download_size: 790541
dataset_size: 2474210
- config_name: hu-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hu
- ru
splits:
- name: train
num_bytes: 7818688
num_examples: 26127
download_size: 2469765
dataset_size: 7818688
- config_name: it-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- ru
splits:
- name: train
num_bytes: 5316952
num_examples: 17906
download_size: 1620478
dataset_size: 5316952
- config_name: en-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- sv
splits:
- name: train
num_bytes: 790785
num_examples: 3095
download_size: 304975
dataset_size: 790785
- config_name: fr-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- sv
splits:
- name: train
num_bytes: 833553
num_examples: 3002
download_size: 321660
dataset_size: 833553
- config_name: it-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- sv
splits:
- name: train
num_bytes: 811413
num_examples: 2998
download_size: 307821
dataset_size: 811413
---
# Dataset Card for OpusBooks
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/Books.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
Each configuration shares the same schema: an `id` string and a `translation` dictionary keyed by the two language codes of the pair.
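An illustrative instance with placeholder values (the `en-fr` configuration is assumed; actual sentence text is elided):
```
{
  'id': '0',
  'translation': {
    'en': '...',
    'fr': '...'
  }
}
```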
### Data Fields
- `id` (`str`): Unique identifier of the parallel sentence pair.
- `translation` (`dict`): Parallel sentences for the pair of languages.
### Data Splits
Each configuration contains a single `train` split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
opus_dgt | ---
annotations_creators:
- found
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sh
- sk
- sl
- sv
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusDgt
configs:
- bg-ga
- bg-hr
- bg-sh
- es-ga
- fi-ga
- ga-nl
- ga-sh
- hr-sk
- hr-sv
- mt-sh
dataset_info:
- config_name: bg-ga
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- ga
splits:
- name: train
num_bytes: 82972428
num_examples: 179142
download_size: 15935979
dataset_size: 82972428
- config_name: bg-hr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- hr
splits:
- name: train
num_bytes: 239828651
num_examples: 701572
download_size: 46804111
dataset_size: 239828651
- config_name: bg-sh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- sh
splits:
- name: train
num_bytes: 498884905
num_examples: 1488507
download_size: 97402723
dataset_size: 498884905
- config_name: fi-ga
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fi
- ga
splits:
- name: train
num_bytes: 61313136
num_examples: 178619
download_size: 14385114
dataset_size: 61313136
- config_name: es-ga
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- ga
splits:
- name: train
num_bytes: 63115666
num_examples: 178696
download_size: 14447359
dataset_size: 63115666
- config_name: ga-sh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ga
- sh
splits:
- name: train
num_bytes: 28666585
num_examples: 91613
download_size: 6963357
dataset_size: 28666585
- config_name: hr-sk
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hr
- sk
splits:
- name: train
num_bytes: 170718371
num_examples: 689263
download_size: 42579941
dataset_size: 170718371
- config_name: mt-sh
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- mt
- sh
splits:
- name: train
num_bytes: 368562443
num_examples: 1450424
download_size: 88598048
dataset_size: 368562443
- config_name: hr-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- hr
- sv
splits:
- name: train
num_bytes: 171858392
num_examples: 696334
download_size: 41410203
dataset_size: 171858392
- config_name: ga-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ga
- nl
splits:
- name: train
num_bytes: 59065574
num_examples: 170644
download_size: 13730934
dataset_size: 59065574
---
# Dataset Card for OpusDgt
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/DGT.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
A collection of translation memories provided by the Joint Research Centre (JRC) Directorate-General for Translation (DGT): https://ec.europa.eu/jrc/en/language-technologies/dgt-translation-memory
The dataset contains 25 languages and 299 bitexts.
To load a language pair that isn't among the predefined configurations, specify the two language codes explicitly,
e.g.
```python
from datasets import load_dataset

dataset = load_dataset("opus_dgt", lang1="it", lang2="pl")
```
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/DGT.php
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sh
- sk
- sl
- sv
## Dataset Structure
### Data Instances
```
{
'id': '0',
'translation': {
"bg": "Протокол за поправка на Конвенцията относно компетентността, признаването и изпълнението на съдебни решения по граждански и търговски дела, подписана в Лугано на 30 октомври 2007 г.",
"ga": "Miontuairisc cheartaitheach maidir le Coinbhinsiún ar dhlínse agus ar aithint agus ar fhorghníomhú breithiúnas in ábhair shibhialta agus tráchtála, a siníodh in Lugano an 30 Deireadh Fómhair 2007"
}
}
```
### Data Fields
- `id` (`str`): Unique identifier of the parallel sentence for the pair of languages.
- `translation` (`dict`): Parallel sentences for the pair of languages.
### Data Splits
The dataset contains a single `train` split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
  booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset. |
opus_dogc | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- ca
- es
license:
- cc0-1.0
multilinguality:
- translation
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OPUS DOGC
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- ca
- es
config_name: tmx
splits:
- name: train
num_bytes: 1258924464
num_examples: 4763575
download_size: 331724078
dataset_size: 1258924464
---
# Dataset Card for OPUS DOGC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/DOGC.php
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
OPUS DOGC is a collection of documents from the Official Journal of the Government of Catalonia, in Catalan and Spanish languages, provided by Antoni Oliver Gonzalez from the Universitat Oberta de Catalunya.
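The corpus ships as a single `tmx` configuration (declared in the metadata above); a minimal loading sketch with the `datasets` library:
```python
from datasets import load_dataset

# "tmx" is the only configuration declared for this dataset.
dataset = load_dataset("opus_dogc", "tmx")
print(dataset["train"][0]["translation"]["ca"])
```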
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is multilingual, with parallel text in:
- Catalan
- Spanish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
A data instance contains the following fields:
- `translation` (`dict`): parallel text for the pair of languages, with keys:
  - `ca`: the Catalan text
  - `es`: the aligned Spanish text
### Data Splits
The dataset contains a single `train` split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset is in the public domain under [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/).
### Citation Information
```
@inproceedings{tiedemann-2012-parallel,
title = "Parallel Data, Tools and Interfaces in {OPUS}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
month = may,
year = "2012",
address = "Istanbul, Turkey",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf",
pages = "2214--2218",
abstract = "This paper presents the current status of OPUS, a growing language resource of parallel corpora and related tools. The focus in OPUS is to provide freely available data sets in various formats together with basic annotation to be useful for applications in computational linguistics, translation studies and cross-linguistic corpus studies. In this paper, we report about new data sets and their features, additional annotation tools and models provided from the website and essential interfaces and on-line services included in the project.",
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
opus_elhuyar | ---
annotations_creators:
- found
language_creators:
- found
language:
- es
- eu
license:
- unknown
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusElhuyar
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- es
- eu
config_name: es-eu
splits:
- name: train
num_bytes: 127833939
num_examples: 642348
download_size: 44468751
dataset_size: 127833939
---
# Dataset Card for OpusElhuyar
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Opus Elhuyar](http://opus.nlpl.eu/Elhuyar.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset was provided by the Elhuyar Foundation (http://webcorpusak.elhuyar.eus/sarrera_paraleloa.html) and submitted to OPUS by Joseba Garcia Beaumont.
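A minimal loading sketch with the `datasets` library (the metadata above declares a single `es-eu` configuration):
```python
from datasets import load_dataset

# "es-eu" is the configuration declared in the metadata above.
dataset = load_dataset("opus_elhuyar", "es-eu")
print(dataset["train"][0]["translation"]["eu"])
```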
### Supported Tasks and Leaderboards
The underlying task is machine translation from Spanish to Basque.
### Languages
The dataset contains parallel text in Spanish (`es`) and Basque (`eu`).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `translation` (`dict`): Parallel sentences for the pair of languages (`es`, `eu`).
### Data Splits
The dataset contains a single `train` split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
### Contributions
Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset. |
opus_euconst | ---
annotations_creators:
- found
language_creators:
- found
language:
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- sk
- sl
- sv
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusEuconst
dataset_info:
- config_name: cs-da
features:
- name: translation
dtype:
translation:
languages:
- cs
- da
splits:
- name: train
num_bytes: 1855320
num_examples: 10554
download_size: 466265
dataset_size: 1855320
- config_name: cs-de
features:
- name: translation
dtype:
translation:
languages:
- cs
- de
splits:
- name: train
num_bytes: 1817185
num_examples: 8844
download_size: 458784
dataset_size: 1817185
- config_name: cs-el
features:
- name: translation
dtype:
translation:
languages:
- cs
- el
splits:
- name: train
num_bytes: 2690312
num_examples: 10072
download_size: 563137
dataset_size: 2690312
- config_name: cs-en
features:
- name: translation
dtype:
translation:
languages:
- cs
- en
splits:
- name: train
num_bytes: 1850952
num_examples: 9954
download_size: 458097
dataset_size: 1850952
- config_name: cs-es
features:
- name: translation
dtype:
translation:
languages:
- cs
- es
splits:
- name: train
num_bytes: 1945318
num_examples: 10023
download_size: 476272
dataset_size: 1945318
- config_name: cs-et
features:
- name: translation
dtype:
translation:
languages:
- cs
- et
splits:
- name: train
num_bytes: 1774485
num_examples: 10037
download_size: 461490
dataset_size: 1774485
- config_name: cs-fi
features:
- name: translation
dtype:
translation:
languages:
- cs
- fi
splits:
- name: train
num_bytes: 1849796
num_examples: 9848
download_size: 466763
dataset_size: 1849796
- config_name: cs-fr
features:
- name: translation
dtype:
translation:
languages:
- cs
- fr
splits:
- name: train
num_bytes: 1919501
num_examples: 10160
download_size: 473256
dataset_size: 1919501
- config_name: cs-ga
features:
- name: translation
dtype:
translation:
languages:
- cs
- ga
splits:
- name: train
num_bytes: 1967636
num_examples: 10126
download_size: 489439
dataset_size: 1967636
- config_name: cs-hu
features:
- name: translation
dtype:
translation:
languages:
- cs
- hu
splits:
- name: train
num_bytes: 1852209
num_examples: 8586
download_size: 463889
dataset_size: 1852209
- config_name: cs-it
features:
- name: translation
dtype:
translation:
languages:
- cs
- it
splits:
- name: train
num_bytes: 1883773
num_examples: 10081
download_size: 469084
dataset_size: 1883773
- config_name: cs-lt
features:
- name: translation
dtype:
translation:
languages:
- cs
- lt
splits:
- name: train
num_bytes: 1789422
num_examples: 10008
download_size: 465951
dataset_size: 1789422
- config_name: cs-lv
features:
- name: translation
dtype:
translation:
languages:
- cs
- lv
splits:
- name: train
num_bytes: 1826174
num_examples: 10144
download_size: 466792
dataset_size: 1826174
- config_name: cs-mt
features:
- name: translation
dtype:
translation:
languages:
- cs
- mt
splits:
- name: train
num_bytes: 1923021
num_examples: 10122
download_size: 481078
dataset_size: 1923021
- config_name: cs-nl
features:
- name: translation
dtype:
translation:
languages:
- cs
- nl
splits:
- name: train
num_bytes: 1928488
num_examples: 10021
download_size: 480011
dataset_size: 1928488
- config_name: cs-pl
features:
- name: translation
dtype:
translation:
languages:
- cs
- pl
splits:
- name: train
num_bytes: 1888546
num_examples: 10029
download_size: 486819
dataset_size: 1888546
- config_name: cs-pt
features:
- name: translation
dtype:
translation:
languages:
- cs
- pt
splits:
- name: train
num_bytes: 1771499
num_examples: 10970
download_size: 445457
dataset_size: 1771499
- config_name: cs-sk
features:
- name: translation
dtype:
translation:
languages:
- cs
- sk
splits:
- name: train
num_bytes: 1875917
num_examples: 10631
download_size: 491941
dataset_size: 1875917
- config_name: cs-sl
features:
- name: translation
dtype:
translation:
languages:
- cs
- sl
splits:
- name: train
num_bytes: 1679335
num_examples: 8860
download_size: 445593
dataset_size: 1679335
- config_name: cs-sv
features:
- name: translation
dtype:
translation:
languages:
- cs
- sv
splits:
- name: train
num_bytes: 1860711
num_examples: 10003
download_size: 469789
dataset_size: 1860711
- config_name: da-de
features:
- name: translation
dtype:
translation:
languages:
- da
- de
splits:
- name: train
num_bytes: 1867126
num_examples: 9001
download_size: 454320
dataset_size: 1867126
- config_name: da-el
features:
- name: translation
dtype:
translation:
languages:
- da
- el
splits:
- name: train
num_bytes: 2764611
num_examples: 10317
download_size: 558957
dataset_size: 2764611
- config_name: da-en
features:
- name: translation
dtype:
translation:
languages:
- da
- en
splits:
- name: train
num_bytes: 1865867
num_examples: 10033
download_size: 442954
dataset_size: 1865867
- config_name: da-es
features:
- name: translation
dtype:
translation:
languages:
- da
- es
splits:
- name: train
num_bytes: 1979057
num_examples: 10227
download_size: 465367
dataset_size: 1979057
- config_name: da-et
features:
- name: translation
dtype:
translation:
languages:
- da
- et
splits:
- name: train
num_bytes: 1802128
num_examples: 10166
download_size: 449125
dataset_size: 1802128
- config_name: da-fi
features:
- name: translation
dtype:
translation:
languages:
- da
- fi
splits:
- name: train
num_bytes: 1932698
num_examples: 10176
download_size: 467143
dataset_size: 1932698
- config_name: da-fr
features:
- name: translation
dtype:
translation:
languages:
- da
- fr
splits:
- name: train
num_bytes: 1966747
num_examples: 10410
download_size: 465562
dataset_size: 1966747
- config_name: da-ga
features:
- name: translation
dtype:
translation:
languages:
- da
- ga
splits:
- name: train
num_bytes: 1996354
num_examples: 10205
download_size: 477823
dataset_size: 1996354
- config_name: da-hu
features:
- name: translation
dtype:
translation:
languages:
- da
- hu
splits:
- name: train
num_bytes: 1880277
num_examples: 8702
download_size: 453417
dataset_size: 1880277
- config_name: da-it
features:
- name: translation
dtype:
translation:
languages:
- da
- it
splits:
- name: train
num_bytes: 1934980
num_examples: 10309
download_size: 461591
dataset_size: 1934980
- config_name: da-lt
features:
- name: translation
dtype:
translation:
languages:
- da
- lt
splits:
- name: train
num_bytes: 1851166
num_examples: 10269
download_size: 461208
dataset_size: 1851166
- config_name: da-lv
features:
- name: translation
dtype:
translation:
languages:
- da
- lv
splits:
- name: train
num_bytes: 1865398
num_examples: 10309
download_size: 457168
dataset_size: 1865398
- config_name: da-mt
features:
- name: translation
dtype:
translation:
languages:
- da
- mt
splits:
- name: train
num_bytes: 1946759
num_examples: 10231
download_size: 467080
dataset_size: 1946759
- config_name: da-nl
features:
- name: translation
dtype:
translation:
languages:
- da
- nl
splits:
- name: train
num_bytes: 1974005
num_examples: 10261
download_size: 471714
dataset_size: 1974005
- config_name: da-pl
features:
- name: translation
dtype:
translation:
languages:
- da
- pl
splits:
- name: train
num_bytes: 1926099
num_examples: 10196
download_size: 476713
dataset_size: 1926099
- config_name: da-pt
features:
- name: translation
dtype:
translation:
languages:
- da
- pt
splits:
- name: train
num_bytes: 1818093
num_examples: 10910
download_size: 435584
dataset_size: 1818093
- config_name: da-sk
features:
- name: translation
dtype:
translation:
languages:
- da
- sk
splits:
- name: train
num_bytes: 1942991
num_examples: 10685
download_size: 486680
dataset_size: 1942991
- config_name: da-sl
features:
- name: translation
dtype:
translation:
languages:
- da
- sl
splits:
- name: train
num_bytes: 1686941
num_examples: 8891
download_size: 430617
dataset_size: 1686941
- config_name: da-sv
features:
- name: translation
dtype:
translation:
languages:
- da
- sv
splits:
- name: train
num_bytes: 1909121
num_examples: 10238
download_size: 462697
dataset_size: 1909121
- config_name: de-el
features:
- name: translation
dtype:
translation:
languages:
- de
- el
splits:
- name: train
num_bytes: 2651162
num_examples: 8865
download_size: 546356
dataset_size: 2651162
- config_name: de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 1898709
num_examples: 8772
download_size: 454470
dataset_size: 1898709
- config_name: de-es
features:
- name: translation
dtype:
translation:
languages:
- de
- es
splits:
- name: train
num_bytes: 1980615
num_examples: 8875
download_size: 468407
dataset_size: 1980615
- config_name: de-et
features:
- name: translation
dtype:
translation:
languages:
- de
- et
splits:
- name: train
num_bytes: 1809098
num_examples: 8764
download_size: 450923
dataset_size: 1809098
- config_name: de-fi
features:
- name: translation
dtype:
translation:
languages:
- de
- fi
splits:
- name: train
num_bytes: 1956123
num_examples: 8894
download_size: 475159
dataset_size: 1956123
- config_name: de-fr
features:
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 2005979
num_examples: 9068
download_size: 478906
dataset_size: 2005979
- config_name: de-ga
features:
- name: translation
dtype:
translation:
languages:
- de
- ga
splits:
- name: train
num_bytes: 1974968
num_examples: 8803
download_size: 474744
dataset_size: 1974968
- config_name: de-hu
features:
- name: translation
dtype:
translation:
languages:
- de
- hu
splits:
- name: train
num_bytes: 2074611
num_examples: 8651
download_size: 498026
dataset_size: 2074611
- config_name: de-it
features:
- name: translation
dtype:
translation:
languages:
- de
- it
splits:
- name: train
num_bytes: 1967686
num_examples: 9044
download_size: 473160
dataset_size: 1967686
- config_name: de-lt
features:
- name: translation
dtype:
translation:
languages:
- de
- lt
splits:
- name: train
num_bytes: 1870207
num_examples: 8957
download_size: 466161
dataset_size: 1870207
- config_name: de-lv
features:
- name: translation
dtype:
translation:
languages:
- de
- lv
splits:
- name: train
num_bytes: 1858944
num_examples: 8885
download_size: 457176
dataset_size: 1858944
- config_name: de-mt
features:
- name: translation
dtype:
translation:
languages:
- de
- mt
splits:
- name: train
num_bytes: 1944735
num_examples: 8882
download_size: 468892
dataset_size: 1944735
- config_name: de-nl
features:
- name: translation
dtype:
translation:
languages:
- de
- nl
splits:
- name: train
num_bytes: 1985168
num_examples: 8938
download_size: 476619
dataset_size: 1985168
- config_name: de-pl
features:
- name: translation
dtype:
translation:
languages:
- de
- pl
splits:
- name: train
num_bytes: 1926141
num_examples: 8866
download_size: 477047
dataset_size: 1926141
- config_name: de-pt
features:
- name: translation
dtype:
translation:
languages:
- de
- pt
splits:
- name: train
num_bytes: 1758881
num_examples: 8963
download_size: 428306
dataset_size: 1758881
- config_name: de-sk
features:
- name: translation
dtype:
translation:
languages:
- de
- sk
splits:
- name: train
num_bytes: 1881942
num_examples: 9033
download_size: 475699
dataset_size: 1881942
- config_name: de-sl
features:
- name: translation
dtype:
translation:
languages:
- de
- sl
splits:
- name: train
num_bytes: 1857168
num_examples: 8713
download_size: 469339
dataset_size: 1857168
- config_name: de-sv
features:
- name: translation
dtype:
translation:
languages:
- de
- sv
splits:
- name: train
num_bytes: 1920145
num_examples: 8860
download_size: 467214
dataset_size: 1920145
- config_name: el-en
features:
- name: translation
dtype:
translation:
languages:
- el
- en
splits:
- name: train
num_bytes: 2727019
num_examples: 9991
download_size: 546453
dataset_size: 2727019
- config_name: el-es
features:
- name: translation
dtype:
translation:
languages:
- el
- es
splits:
- name: train
num_bytes: 2908150
num_examples: 10284
download_size: 581166
dataset_size: 2908150
- config_name: el-et
features:
- name: translation
dtype:
translation:
languages:
- el
- et
splits:
- name: train
num_bytes: 2714890
num_examples: 10173
download_size: 561207
dataset_size: 2714890
- config_name: el-fi
features:
- name: translation
dtype:
translation:
languages:
- el
- fi
splits:
- name: train
num_bytes: 2800083
num_examples: 10056
download_size: 569734
dataset_size: 2800083
- config_name: el-fr
features:
- name: translation
dtype:
translation:
languages:
- el
- fr
splits:
- name: train
num_bytes: 2875630
num_examples: 10315
download_size: 576084
dataset_size: 2875630
- config_name: el-ga
features:
- name: translation
dtype:
translation:
languages:
- el
- ga
splits:
- name: train
num_bytes: 2861213
num_examples: 10094
download_size: 578923
dataset_size: 2861213
- config_name: el-hu
features:
- name: translation
dtype:
translation:
languages:
- el
- hu
splits:
- name: train
num_bytes: 2679793
num_examples: 8745
download_size: 554539
dataset_size: 2679793
- config_name: el-it
features:
- name: translation
dtype:
translation:
languages:
- el
- it
splits:
- name: train
num_bytes: 2851766
num_examples: 10303
download_size: 574504
dataset_size: 2851766
- config_name: el-lt
features:
- name: translation
dtype:
translation:
languages:
- el
- lt
splits:
- name: train
num_bytes: 2754253
num_examples: 10208
download_size: 571640
dataset_size: 2754253
- config_name: el-lv
features:
- name: translation
dtype:
translation:
languages:
- el
- lv
splits:
- name: train
num_bytes: 2733681
num_examples: 10146
download_size: 559029
dataset_size: 2733681
- config_name: el-mt
features:
- name: translation
dtype:
translation:
languages:
- el
- mt
splits:
- name: train
num_bytes: 2873683
num_examples: 10277
download_size: 581386
dataset_size: 2873683
- config_name: el-nl
features:
- name: translation
dtype:
translation:
languages:
- el
- nl
splits:
- name: train
num_bytes: 2901506
num_examples: 10304
download_size: 587010
dataset_size: 2901506
- config_name: el-pl
features:
- name: translation
dtype:
translation:
languages:
- el
- pl
splits:
- name: train
num_bytes: 2851286
num_examples: 10250
download_size: 591841
dataset_size: 2851286
- config_name: el-pt
features:
- name: translation
dtype:
translation:
languages:
- el
- pt
splits:
- name: train
num_bytes: 2578565
num_examples: 10102
download_size: 519256
dataset_size: 2578565
- config_name: el-sk
features:
- name: translation
dtype:
translation:
languages:
- el
- sk
splits:
- name: train
num_bytes: 2790905
num_examples: 10332
download_size: 584816
dataset_size: 2790905
- config_name: el-sl
features:
- name: translation
dtype:
translation:
languages:
- el
- sl
splits:
- name: train
num_bytes: 2467857
num_examples: 8852
download_size: 524469
dataset_size: 2467857
- config_name: el-sv
features:
- name: translation
dtype:
translation:
languages:
- el
- sv
splits:
- name: train
num_bytes: 2790303
num_examples: 10114
download_size: 568571
dataset_size: 2790303
- config_name: en-es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 2043033
num_examples: 10040
download_size: 470962
dataset_size: 2043033
- config_name: en-et
features:
- name: translation
dtype:
translation:
languages:
- en
- et
splits:
- name: train
num_bytes: 1879535
num_examples: 10087
download_size: 456941
dataset_size: 1879535
- config_name: en-fi
features:
- name: translation
dtype:
translation:
languages:
- en
- fi
splits:
- name: train
num_bytes: 1994869
num_examples: 10027
download_size: 471936
dataset_size: 1994869
- config_name: en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 2013987
num_examples: 10104
download_size: 468914
dataset_size: 2013987
- config_name: en-ga
features:
- name: translation
dtype:
translation:
languages:
- en
- ga
splits:
- name: train
num_bytes: 2040647
num_examples: 10028
download_size: 479083
dataset_size: 2040647
- config_name: en-hu
features:
- name: translation
dtype:
translation:
languages:
- en
- hu
splits:
- name: train
num_bytes: 1981043
num_examples: 8749
download_size: 469127
dataset_size: 1981043
- config_name: en-it
features:
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 1979428
num_examples: 10073
download_size: 464322
dataset_size: 1979428
- config_name: en-lt
features:
- name: translation
dtype:
translation:
languages:
- en
- lt
splits:
- name: train
num_bytes: 1924565
num_examples: 10172
download_size: 469369
dataset_size: 1924565
- config_name: en-lv
features:
- name: translation
dtype:
translation:
languages:
- en
- lv
splits:
- name: train
num_bytes: 1892514
num_examples: 10037
download_size: 453926
dataset_size: 1892514
- config_name: en-mt
features:
- name: translation
dtype:
translation:
languages:
- en
- mt
splits:
- name: train
num_bytes: 2013738
num_examples: 10121
download_size: 473914
dataset_size: 2013738
- config_name: en-nl
features:
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 2015360
num_examples: 10033
download_size: 472615
dataset_size: 2015360
- config_name: en-pl
features:
- name: translation
dtype:
translation:
languages:
- en
- pl
splits:
- name: train
num_bytes: 1975332
num_examples: 9938
download_size: 479851
dataset_size: 1975332
- config_name: en-pt
features:
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: train
num_bytes: 1769022
num_examples: 9990
download_size: 419579
dataset_size: 1769022
- config_name: en-sk
features:
- name: translation
dtype:
translation:
languages:
- en
- sk
splits:
- name: train
num_bytes: 1912246
num_examples: 10120
download_size: 473226
dataset_size: 1912246
- config_name: en-sl
features:
- name: translation
dtype:
translation:
languages:
- en
- sl
splits:
- name: train
num_bytes: 1752898
num_examples: 8808
download_size: 438356
dataset_size: 1752898
- config_name: en-sv
features:
- name: translation
dtype:
translation:
languages:
- en
- sv
splits:
- name: train
num_bytes: 1951529
num_examples: 9955
download_size: 463451
dataset_size: 1951529
- config_name: es-et
features:
- name: translation
dtype:
translation:
languages:
- es
- et
splits:
- name: train
num_bytes: 1983166
num_examples: 10191
download_size: 477890
dataset_size: 1983166
- config_name: es-fi
features:
- name: translation
dtype:
translation:
languages:
- es
- fi
splits:
- name: train
num_bytes: 2083093
num_examples: 10121
download_size: 489039
dataset_size: 2083093
- config_name: es-fr
features:
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 2148462
num_examples: 10420
download_size: 493475
dataset_size: 2148462
- config_name: es-ga
features:
- name: translation
dtype:
translation:
languages:
- es
- ga
splits:
- name: train
num_bytes: 2144567
num_examples: 10147
download_size: 499793
dataset_size: 2144567
- config_name: es-hu
features:
- name: translation
dtype:
translation:
languages:
- es
- hu
splits:
- name: train
num_bytes: 2051889
num_examples: 8760
download_size: 481598
dataset_size: 2051889
- config_name: es-it
features:
- name: translation
dtype:
translation:
languages:
- es
- it
splits:
- name: train
num_bytes: 2108065
num_examples: 10336
download_size: 488520
dataset_size: 2108065
- config_name: es-lt
features:
- name: translation
dtype:
translation:
languages:
- es
- lt
splits:
- name: train
num_bytes: 2020084
num_examples: 10297
download_size: 487664
dataset_size: 2020084
- config_name: es-lv
features:
- name: translation
dtype:
translation:
languages:
- es
- lv
splits:
- name: train
num_bytes: 2007758
num_examples: 10218
download_size: 477478
dataset_size: 2007758
- config_name: es-mt
features:
- name: translation
dtype:
translation:
languages:
- es
- mt
splits:
- name: train
num_bytes: 2125254
num_examples: 10270
download_size: 495721
dataset_size: 2125254
- config_name: es-nl
features:
- name: translation
dtype:
translation:
languages:
- es
- nl
splits:
- name: train
num_bytes: 2156944
num_examples: 10331
download_size: 501762
dataset_size: 2156944
- config_name: es-pl
features:
- name: translation
dtype:
translation:
languages:
- es
- pl
splits:
- name: train
num_bytes: 2105006
num_examples: 10228
download_size: 505622
dataset_size: 2105006
- config_name: es-pt
features:
- name: translation
dtype:
translation:
languages:
- es
- pt
splits:
- name: train
num_bytes: 1885530
num_examples: 10186
download_size: 440336
dataset_size: 1885530
- config_name: es-sk
features:
- name: translation
dtype:
translation:
languages:
- es
- sk
splits:
- name: train
num_bytes: 2026484
num_examples: 10322
download_size: 496375
dataset_size: 2026484
- config_name: es-sl
features:
- name: translation
dtype:
translation:
languages:
- es
- sl
splits:
- name: train
num_bytes: 1833574
num_examples: 8904
download_size: 453761
dataset_size: 1833574
- config_name: es-sv
features:
- name: translation
dtype:
translation:
languages:
- es
- sv
splits:
- name: train
num_bytes: 2074677
num_examples: 10215
download_size: 487779
dataset_size: 2074677
- config_name: et-fi
features:
- name: translation
dtype:
translation:
languages:
- et
- fi
splits:
- name: train
num_bytes: 1807030
num_examples: 9707
download_size: 450723
dataset_size: 1807030
- config_name: et-fr
features:
- name: translation
dtype:
translation:
languages:
- et
- fr
splits:
- name: train
num_bytes: 1943121
num_examples: 10221
download_size: 471593
dataset_size: 1943121
- config_name: et-ga
features:
- name: translation
dtype:
translation:
languages:
- et
- ga
splits:
- name: train
num_bytes: 1982968
num_examples: 10159
download_size: 486167
dataset_size: 1982968
- config_name: et-hu
features:
- name: translation
dtype:
translation:
languages:
- et
- hu
splits:
- name: train
num_bytes: 1898818
num_examples: 8872
download_size: 467740
dataset_size: 1898818
- config_name: et-it
features:
- name: translation
dtype:
translation:
languages:
- et
- it
splits:
- name: train
num_bytes: 1915669
num_examples: 10198
download_size: 468808
dataset_size: 1915669
- config_name: et-lt
features:
- name: translation
dtype:
translation:
languages:
- et
- lt
splits:
- name: train
num_bytes: 1777705
num_examples: 10015
download_size: 457284
dataset_size: 1777705
- config_name: et-lv
features:
- name: translation
dtype:
translation:
languages:
- et
- lv
splits:
- name: train
num_bytes: 1848536
num_examples: 10379
download_size: 464752
dataset_size: 1848536
- config_name: et-mt
features:
- name: translation
dtype:
translation:
languages:
- et
- mt
splits:
- name: train
num_bytes: 1957911
num_examples: 10278
download_size: 481481
dataset_size: 1957911
- config_name: et-nl
features:
- name: translation
dtype:
translation:
languages:
- et
- nl
splits:
- name: train
num_bytes: 1967844
num_examples: 10196
download_size: 482333
dataset_size: 1967844
- config_name: et-pl
features:
- name: translation
dtype:
translation:
languages:
- et
- pl
splits:
- name: train
num_bytes: 1932983
num_examples: 10194
download_size: 489907
dataset_size: 1932983
- config_name: et-pt
features:
- name: translation
dtype:
translation:
languages:
- et
- pt
splits:
- name: train
num_bytes: 1679341
num_examples: 10018
download_size: 419447
dataset_size: 1679341
- config_name: et-sk
features:
- name: translation
dtype:
translation:
languages:
- et
- sk
splits:
- name: train
num_bytes: 1790786
num_examples: 10022
download_size: 466725
dataset_size: 1790786
- config_name: et-sl
features:
- name: translation
dtype:
translation:
languages:
- et
- sl
splits:
- name: train
num_bytes: 1675833
num_examples: 8896
download_size: 438092
dataset_size: 1675833
- config_name: et-sv
features:
- name: translation
dtype:
translation:
languages:
- et
- sv
splits:
- name: train
num_bytes: 1903846
num_examples: 10193
download_size: 472279
dataset_size: 1903846
- config_name: fi-fr
features:
- name: translation
dtype:
translation:
languages:
- fi
- fr
splits:
- name: train
num_bytes: 2026978
num_examples: 10077
download_size: 478585
dataset_size: 2026978
- config_name: fi-ga
features:
- name: translation
dtype:
translation:
languages:
- fi
- ga
splits:
- name: train
num_bytes: 2087064
num_examples: 10098
download_size: 498821
dataset_size: 2087064
- config_name: fi-hu
features:
- name: translation
dtype:
translation:
languages:
- fi
- hu
splits:
- name: train
num_bytes: 1963941
num_examples: 8606
download_size: 471324
dataset_size: 1963941
- config_name: fi-it
features:
- name: translation
dtype:
translation:
languages:
- fi
- it
splits:
- name: train
num_bytes: 1992667
num_examples: 10048
download_size: 474425
dataset_size: 1992667
- config_name: fi-lt
features:
- name: translation
dtype:
translation:
languages:
- fi
- lt
splits:
- name: train
num_bytes: 1954156
num_examples: 10166
download_size: 484551
dataset_size: 1954156
- config_name: fi-lv
features:
- name: translation
dtype:
translation:
languages:
- fi
- lv
splits:
- name: train
num_bytes: 1944169
num_examples: 10121
download_size: 475122
dataset_size: 1944169
- config_name: fi-mt
features:
- name: translation
dtype:
translation:
languages:
- fi
- mt
splits:
- name: train
num_bytes: 2041035
num_examples: 10097
download_size: 489046
dataset_size: 2041035
- config_name: fi-nl
features:
- name: translation
dtype:
translation:
languages:
- fi
- nl
splits:
- name: train
num_bytes: 2055587
num_examples: 10082
download_size: 490605
dataset_size: 2055587
- config_name: fi-pl
features:
- name: translation
dtype:
translation:
languages:
- fi
- pl
splits:
- name: train
num_bytes: 2043626
num_examples: 10147
download_size: 503252
dataset_size: 2043626
- config_name: fi-pt
features:
- name: translation
dtype:
translation:
languages:
- fi
- pt
splits:
- name: train
num_bytes: 1825183
num_examples: 10098
download_size: 440052
dataset_size: 1825183
- config_name: fi-sk
features:
- name: translation
dtype:
translation:
languages:
- fi
- sk
splits:
- name: train
num_bytes: 1943056
num_examples: 10080
download_size: 489463
dataset_size: 1943056
- config_name: fi-sl
features:
- name: translation
dtype:
translation:
languages:
- fi
- sl
splits:
- name: train
num_bytes: 1784294
num_examples: 8826
download_size: 452938
dataset_size: 1784294
- config_name: fi-sv
features:
- name: translation
dtype:
translation:
languages:
- fi
- sv
splits:
- name: train
num_bytes: 2016902
num_examples: 10143
download_size: 486333
dataset_size: 2016902
- config_name: fr-ga
features:
- name: translation
dtype:
translation:
languages:
- fr
- ga
splits:
- name: train
num_bytes: 2069197
num_examples: 10119
download_size: 484978
dataset_size: 2069197
- config_name: fr-hu
features:
- name: translation
dtype:
translation:
languages:
- fr
- hu
splits:
- name: train
num_bytes: 2024066
num_examples: 8781
download_size: 478017
dataset_size: 2024066
- config_name: fr-it
features:
- name: translation
dtype:
translation:
languages:
- fr
- it
splits:
- name: train
num_bytes: 2103016
num_examples: 10562
download_size: 490312
dataset_size: 2103016
- config_name: fr-lt
features:
- name: translation
dtype:
translation:
languages:
- fr
- lt
splits:
- name: train
num_bytes: 1964759
num_examples: 10346
download_size: 478426
dataset_size: 1964759
- config_name: fr-lv
features:
- name: translation
dtype:
translation:
languages:
- fr
- lv
splits:
- name: train
num_bytes: 1947101
num_examples: 10269
download_size: 466866
dataset_size: 1947101
- config_name: fr-mt
features:
- name: translation
dtype:
translation:
languages:
- fr
- mt
splits:
- name: train
num_bytes: 2069132
num_examples: 10333
download_size: 486513
dataset_size: 2069132
- config_name: fr-nl
features:
- name: translation
dtype:
translation:
languages:
- fr
- nl
splits:
- name: train
num_bytes: 2119922
num_examples: 10363
download_size: 495642
dataset_size: 2119922
- config_name: fr-pl
features:
- name: translation
dtype:
translation:
languages:
- fr
- pl
splits:
- name: train
num_bytes: 2039779
num_examples: 10243
download_size: 494144
dataset_size: 2039779
- config_name: fr-pt
features:
- name: translation
dtype:
translation:
languages:
- fr
- pt
splits:
- name: train
num_bytes: 1839753
num_examples: 10469
download_size: 433277
dataset_size: 1839753
- config_name: fr-sk
features:
- name: translation
dtype:
translation:
languages:
- fr
- sk
splits:
- name: train
num_bytes: 1966993
num_examples: 10352
download_size: 485700
dataset_size: 1966993
- config_name: fr-sl
features:
- name: translation
dtype:
translation:
languages:
- fr
- sl
splits:
- name: train
num_bytes: 1804145
num_examples: 9125
download_size: 449547
dataset_size: 1804145
- config_name: fr-sv
features:
- name: translation
dtype:
translation:
languages:
- fr
- sv
splits:
- name: train
num_bytes: 2002378
num_examples: 10223
download_size: 475110
dataset_size: 2002378
- config_name: ga-hu
features:
- name: translation
dtype:
translation:
languages:
- ga
- hu
splits:
- name: train
num_bytes: 2002194
num_examples: 8581
download_size: 479013
dataset_size: 2002194
- config_name: ga-it
features:
- name: translation
dtype:
translation:
languages:
- ga
- it
splits:
- name: train
num_bytes: 2055494
num_examples: 10052
download_size: 485055
dataset_size: 2055494
- config_name: ga-lt
features:
- name: translation
dtype:
translation:
languages:
- ga
- lt
splits:
- name: train
num_bytes: 2008437
num_examples: 10202
download_size: 492325
dataset_size: 2008437
- config_name: ga-lv
features:
- name: translation
dtype:
translation:
languages:
- ga
- lv
splits:
- name: train
num_bytes: 2030212
num_examples: 10233
download_size: 490537
dataset_size: 2030212
- config_name: ga-mt
features:
- name: translation
dtype:
translation:
languages:
- ga
- mt
splits:
- name: train
num_bytes: 2110440
num_examples: 10192
download_size: 499706
dataset_size: 2110440
- config_name: ga-nl
features:
- name: translation
dtype:
translation:
languages:
- ga
- nl
splits:
- name: train
num_bytes: 2115653
num_examples: 10092
download_size: 499791
dataset_size: 2115653
- config_name: ga-pl
features:
- name: translation
dtype:
translation:
languages:
- ga
- pl
splits:
- name: train
num_bytes: 2097966
num_examples: 10127
download_size: 512564
dataset_size: 2097966
- config_name: ga-pt
features:
- name: translation
dtype:
translation:
languages:
- ga
- pt
splits:
- name: train
num_bytes: 1897633
num_examples: 10228
download_size: 452712
dataset_size: 1897633
- config_name: ga-sk
features:
- name: translation
dtype:
translation:
languages:
- ga
- sk
splits:
- name: train
num_bytes: 2002894
num_examples: 10160
download_size: 498007
dataset_size: 2002894
- config_name: ga-sl
features:
- name: translation
dtype:
translation:
languages:
- ga
- sl
splits:
- name: train
num_bytes: 1826060
num_examples: 8880
download_size: 459764
dataset_size: 1826060
- config_name: ga-sv
features:
- name: translation
dtype:
translation:
languages:
- ga
- sv
splits:
- name: train
num_bytes: 2066669
num_examples: 10141
download_size: 494991
dataset_size: 2066669
- config_name: hu-it
features:
- name: translation
dtype:
translation:
languages:
- hu
- it
splits:
- name: train
num_bytes: 1986234
num_examples: 8743
download_size: 472784
dataset_size: 1986234
- config_name: hu-lt
features:
- name: translation
dtype:
translation:
languages:
- hu
- lt
splits:
- name: train
num_bytes: 1923753
num_examples: 8773
download_size: 475181
dataset_size: 1923753
- config_name: hu-lv
features:
- name: translation
dtype:
translation:
languages:
- hu
- lv
splits:
- name: train
num_bytes: 1894395
num_examples: 8805
download_size: 461543
dataset_size: 1894395
- config_name: hu-mt
features:
- name: translation
dtype:
translation:
languages:
- hu
- mt
splits:
- name: train
num_bytes: 2008555
num_examples: 8746
download_size: 480783
dataset_size: 2008555
- config_name: hu-nl
features:
- name: translation
dtype:
translation:
languages:
- hu
- nl
splits:
- name: train
num_bytes: 2043610
num_examples: 8768
download_size: 486893
dataset_size: 2043610
- config_name: hu-pl
features:
- name: translation
dtype:
translation:
languages:
- hu
- pl
splits:
- name: train
num_bytes: 2000945
num_examples: 8746
download_size: 490835
dataset_size: 2000945
- config_name: hu-pt
features:
- name: translation
dtype:
translation:
languages:
- hu
- pt
splits:
- name: train
num_bytes: 1763582
num_examples: 8671
download_size: 425909
dataset_size: 1763582
- config_name: hu-sk
features:
- name: translation
dtype:
translation:
languages:
- hu
- sk
splits:
- name: train
num_bytes: 1920589
num_examples: 8754
download_size: 480598
dataset_size: 1920589
- config_name: hu-sl
features:
- name: translation
dtype:
translation:
languages:
- hu
- sl
splits:
- name: train
num_bytes: 1931136
num_examples: 8822
download_size: 482086
dataset_size: 1931136
- config_name: hu-sv
features:
- name: translation
dtype:
translation:
languages:
- hu
- sv
splits:
- name: train
num_bytes: 1975308
num_examples: 8737
download_size: 475800
dataset_size: 1975308
- config_name: it-lt
features:
- name: translation
dtype:
translation:
languages:
- it
- lt
splits:
- name: train
num_bytes: 1962002
num_examples: 10310
download_size: 479993
dataset_size: 1962002
- config_name: it-lv
features:
- name: translation
dtype:
translation:
languages:
- it
- lv
splits:
- name: train
num_bytes: 1947096
num_examples: 10228
download_size: 469605
dataset_size: 1947096
- config_name: it-mt
features:
- name: translation
dtype:
translation:
languages:
- it
- mt
splits:
- name: train
num_bytes: 2062132
num_examples: 10284
download_size: 487568
dataset_size: 2062132
- config_name: it-nl
features:
- name: translation
dtype:
translation:
languages:
- it
- nl
splits:
- name: train
num_bytes: 2098018
num_examples: 10354
download_size: 494369
dataset_size: 2098018
- config_name: it-pl
features:
- name: translation
dtype:
translation:
languages:
- it
- pl
splits:
- name: train
num_bytes: 2035132
num_examples: 10225
download_size: 495982
dataset_size: 2035132
- config_name: it-pt
features:
- name: translation
dtype:
translation:
languages:
- it
- pt
splits:
- name: train
num_bytes: 1829009
num_examples: 10249
download_size: 435577
dataset_size: 1829009
- config_name: it-sk
features:
- name: translation
dtype:
translation:
languages:
- it
- sk
splits:
- name: train
num_bytes: 1959852
num_examples: 10322
download_size: 487170
dataset_size: 1959852
- config_name: it-sl
features:
- name: translation
dtype:
translation:
languages:
- it
- sl
splits:
- name: train
num_bytes: 1782313
num_examples: 8916
download_size: 447162
dataset_size: 1782313
- config_name: it-sv
features:
- name: translation
dtype:
translation:
languages:
- it
- sv
splits:
- name: train
num_bytes: 2007053
num_examples: 10226
download_size: 479168
dataset_size: 2007053
- config_name: lt-lv
features:
- name: translation
dtype:
translation:
languages:
- lt
- lv
splits:
- name: train
num_bytes: 1887991
num_examples: 10355
download_size: 475323
dataset_size: 1887991
- config_name: lt-mt
features:
- name: translation
dtype:
translation:
languages:
- lt
- mt
splits:
- name: train
num_bytes: 2004370
num_examples: 10407
download_size: 493694
dataset_size: 2004370
- config_name: lt-nl
features:
- name: translation
dtype:
translation:
languages:
- lt
- nl
splits:
- name: train
num_bytes: 2010329
num_examples: 10309
download_size: 493675
dataset_size: 2010329
- config_name: lt-pl
features:
- name: translation
dtype:
translation:
languages:
- lt
- pl
splits:
- name: train
num_bytes: 1962628
num_examples: 10255
download_size: 498073
dataset_size: 1962628
- config_name: lt-pt
features:
- name: translation
dtype:
translation:
languages:
- lt
- pt
splits:
- name: train
num_bytes: 1750721
num_examples: 10260
download_size: 435764
dataset_size: 1750721
- config_name: lt-sk
features:
- name: translation
dtype:
translation:
languages:
- lt
- sk
splits:
- name: train
num_bytes: 1896763
num_examples: 10395
download_size: 492051
dataset_size: 1896763
- config_name: lt-sl
features:
- name: translation
dtype:
translation:
languages:
- lt
- sl
splits:
- name: train
num_bytes: 1710645
num_examples: 8912
download_size: 447984
dataset_size: 1710645
- config_name: lt-sv
features:
- name: translation
dtype:
translation:
languages:
- lt
- sv
splits:
- name: train
num_bytes: 1928035
num_examples: 10208
download_size: 480136
dataset_size: 1928035
- config_name: lv-mt
features:
- name: translation
dtype:
translation:
languages:
- lv
- mt
splits:
- name: train
num_bytes: 1971568
num_examples: 10231
download_size: 477968
dataset_size: 1971568
- config_name: lv-nl
features:
- name: translation
dtype:
translation:
languages:
- lv
- nl
splits:
- name: train
num_bytes: 1981779
num_examples: 10160
download_size: 478862
dataset_size: 1981779
- config_name: lv-pl
features:
- name: translation
dtype:
translation:
languages:
- lv
- pl
splits:
- name: train
num_bytes: 1933717
num_examples: 10106
download_size: 483176
dataset_size: 1933717
- config_name: lv-pt
features:
- name: translation
dtype:
translation:
languages:
- lv
- pt
splits:
- name: train
num_bytes: 1739250
num_examples: 10257
download_size: 425977
dataset_size: 1739250
- config_name: lv-sk
features:
- name: translation
dtype:
translation:
languages:
- lv
- sk
splits:
- name: train
num_bytes: 1866635
num_examples: 10234
download_size: 476961
dataset_size: 1866635
- config_name: lv-sl
features:
- name: translation
dtype:
translation:
languages:
- lv
- sl
splits:
- name: train
num_bytes: 1706716
num_examples: 8939
download_size: 440111
dataset_size: 1706716
- config_name: lv-sv
features:
- name: translation
dtype:
translation:
languages:
- lv
- sv
splits:
- name: train
num_bytes: 1903483
num_examples: 10083
download_size: 465968
dataset_size: 1903483
- config_name: mt-nl
features:
- name: translation
dtype:
translation:
languages:
- mt
- nl
splits:
- name: train
num_bytes: 2113179
num_examples: 10281
download_size: 501063
dataset_size: 2113179
- config_name: mt-pl
features:
- name: translation
dtype:
translation:
languages:
- mt
- pl
splits:
- name: train
num_bytes: 2068098
num_examples: 10232
download_size: 506849
dataset_size: 2068098
- config_name: mt-pt
features:
- name: translation
dtype:
translation:
languages:
- mt
- pt
splits:
- name: train
num_bytes: 1842914
num_examples: 10278
download_size: 441801
dataset_size: 1842914
- config_name: mt-sk
features:
- name: translation
dtype:
translation:
languages:
- mt
- sk
splits:
- name: train
num_bytes: 1997346
num_examples: 10344
download_size: 499013
dataset_size: 1997346
- config_name: mt-sl
features:
- name: translation
dtype:
translation:
languages:
- mt
- sl
splits:
- name: train
num_bytes: 1795035
num_examples: 8892
download_size: 453508
dataset_size: 1795035
- config_name: mt-sv
features:
- name: translation
dtype:
translation:
languages:
- mt
- sv
splits:
- name: train
num_bytes: 2031253
num_examples: 10211
download_size: 487757
dataset_size: 2031253
- config_name: nl-pl
features:
- name: translation
dtype:
translation:
languages:
- nl
- pl
splits:
- name: train
num_bytes: 2090797
num_examples: 10244
download_size: 510559
dataset_size: 2090797
- config_name: nl-pt
features:
- name: translation
dtype:
translation:
languages:
- nl
- pt
splits:
- name: train
num_bytes: 1838423
num_examples: 10080
download_size: 438938
dataset_size: 1838423
- config_name: nl-sk
features:
- name: translation
dtype:
translation:
languages:
- nl
- sk
splits:
- name: train
num_bytes: 2018775
num_examples: 10333
download_size: 502418
dataset_size: 2018775
- config_name: nl-sl
features:
- name: translation
dtype:
translation:
languages:
- nl
- sl
splits:
- name: train
num_bytes: 1831798
num_examples: 8969
download_size: 460139
dataset_size: 1831798
- config_name: nl-sv
features:
- name: translation
dtype:
translation:
languages:
- nl
- sv
splits:
- name: train
num_bytes: 2061265
num_examples: 10232
download_size: 492864
dataset_size: 2061265
- config_name: pl-pt
features:
- name: translation
dtype:
translation:
languages:
- pl
- pt
splits:
- name: train
num_bytes: 1825022
num_examples: 10157
download_size: 451029
dataset_size: 1825022
- config_name: pl-sk
features:
- name: translation
dtype:
translation:
languages:
- pl
- sk
splits:
- name: train
num_bytes: 1974150
num_examples: 10335
download_size: 507836
dataset_size: 1974150
- config_name: pl-sl
features:
- name: translation
dtype:
translation:
languages:
- pl
- sl
splits:
- name: train
num_bytes: 1781021
num_examples: 8819
download_size: 462806
dataset_size: 1781021
- config_name: pl-sv
features:
- name: translation
dtype:
translation:
languages:
- pl
- sv
splits:
- name: train
num_bytes: 2016878
num_examples: 10147
download_size: 498039
dataset_size: 2016878
- config_name: pt-sk
features:
- name: translation
dtype:
translation:
languages:
- pt
- sk
splits:
- name: train
num_bytes: 1782257
num_examples: 10597
download_size: 449103
dataset_size: 1782257
- config_name: pt-sl
features:
- name: translation
dtype:
translation:
languages:
- pt
- sl
splits:
- name: train
num_bytes: 1557351
num_examples: 8988
download_size: 399971
dataset_size: 1557351
- config_name: pt-sv
features:
- name: translation
dtype:
translation:
languages:
- pt
- sv
splits:
- name: train
num_bytes: 1760642
num_examples: 10026
download_size: 427317
dataset_size: 1760642
- config_name: sk-sl
features:
- name: translation
dtype:
translation:
languages:
- sk
- sl
splits:
- name: train
num_bytes: 1712590
num_examples: 9051
download_size: 454375
dataset_size: 1712590
- config_name: sk-sv
features:
- name: translation
dtype:
translation:
languages:
- sk
- sv
splits:
- name: train
num_bytes: 1937086
num_examples: 10253
download_size: 488924
dataset_size: 1937086
- config_name: sl-sv
features:
- name: translation
dtype:
translation:
languages:
- sl
- sv
splits:
- name: train
num_bytes: 1750298
num_examples: 8816
download_size: 446016
dataset_size: 1750298
---
# Dataset Card for EUconst
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [EUconst](http://opus.nlpl.eu/EUconst.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A parallel corpus collected from the European Constitution, covering 21 languages and 210 bitexts.
### Supported Tasks and Leaderboards
The underlying task is machine translation.
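A minimal loading sketch with the `datasets` library; the Hub identifier `opus_euconst` is an assumption here, and `en-fr` is one of the configs listed in the metadata above:
```python
from datasets import load_dataset

# Hub id is an assumption; config names follow the "xx-yy" pattern
# from the metadata above (e.g. "en-fr").
dataset = load_dataset("opus_euconst", "en-fr", split="train")

# Each example holds a `translation` dict keyed by language code.
pair = dataset[0]["translation"]
print(pair["en"])
print(pair["fr"])
```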
### Languages
The 21 languages covered by the configs above: Czech (cs), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Finnish (fi), French (fr), Irish (ga), Hungarian (hu), Italian (it), Lithuanian (lt), Latvian (lv), Maltese (mt), Dutch (nl), Polish (pl), Portuguese (pt), Slovak (sk), Slovenian (sl), and Swedish (sv).
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
opus_finlex | ---
annotations_creators:
- found
language_creators:
- found
language:
- fi
- sv
license:
- unknown
multilinguality:
- translation
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusFinlex
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- fi
- sv
config_name: fi-sv
splits:
- name: train
num_bytes: 610550215
num_examples: 3114141
download_size: 153886554
dataset_size: 610550215
---
# Dataset Card for OpusFinlex
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Finlex](http://opus.nlpl.eu/Finlex.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Finlex Data Base is a comprehensive collection of legislative and other judicial information of Finland, available in Finnish, Swedish, and partially in English. This corpus is taken from the Semantic Finlex service, which provides the Finnish and Swedish data as linked open data and as raw XML files.
### Supported Tasks and Leaderboards
The underlying task is machine translation for the Finnish-Swedish language pair.
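A minimal loading sketch with the `datasets` library, using the single `fi-sv` config listed in the metadata above:
```python
from datasets import load_dataset

# The card lists a single Finnish-Swedish config, "fi-sv".
dataset = load_dataset("opus_finlex", "fi-sv", split="train")

# Each example holds a `translation` dict keyed by language code.
pair = dataset[0]["translation"]
print(pair["fi"])
print(pair["sv"])
```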
### Languages
Swedish and Finnish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
### Contributions
Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset. |
opus_fiskmo | ---
annotations_creators:
- found
language_creators:
- found
language:
- fi
- sv
license:
- unknown
multilinguality:
- translation
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusFiskmo
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- fi
- sv
config_name: fi-sv
splits:
- name: train
num_bytes: 326528834
num_examples: 2100001
download_size: 144858927
dataset_size: 326528834
---
# Dataset Card for OpusFiskmo
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [fiskmo](http://opus.nlpl.eu/fiskmo.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Fiskmo is a massive parallel corpus for Finnish and Swedish.
### Supported Tasks and Leaderboards
The underlying task is machine translation for the Finnish-Swedish language pair.
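Because only a train split is shipped (see the metadata above), users typically carve out their own evaluation set; a minimal sketch, assuming the `fi-sv` config name from the metadata:
```python
from datasets import load_dataset

dataset = load_dataset("opus_fiskmo", "fi-sv", split="train")

# Only a train split is provided; hold out 1% for evaluation.
splits = dataset.train_test_split(test_size=0.01, seed=42)
print(len(splits["train"]), len(splits["test"]))
```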
### Languages
Finnish and Swedish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
### Contributions
Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset. |
opus_gnome | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- am
- an
- ang
- ar
- as
- ast
- az
- bal
- be
- bem
- bg
- bn
- bo
- br
- brx
- bs
- ca
- crh
- cs
- csb
- cy
- da
- de
- dv
- dz
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fo
- fr
- fur
- fy
- ga
- gd
- gl
- gn
- gu
- gv
- ha
- he
- hi
- hr
- hu
- hy
- ia
- id
- ig
- io
- is
- it
- ja
- jbo
- ka
- kg
- kk
- km
- kn
- ko
- kr
- ks
- ku
- ky
- la
- lg
- li
- lo
- lt
- lv
- mai
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- mus
- my
- nb
- nds
- ne
- nhn
- nl
- nn
- 'no'
- nqo
- nr
- nso
- oc
- or
- os
- pa
- pl
- ps
- pt
- quz
- ro
- ru
- rw
- si
- sk
- sl
- so
- sq
- sr
- st
- sv
- sw
- szl
- ta
- te
- tg
- th
- tk
- tl
- tr
- ts
- tt
- tyj
- ug
- uk
- ur
- uz
- vi
- wa
- xh
- yi
- yo
- zh
- zu
language_bcp47:
- ar-TN
- az-IR
- bg-BG
- bn-IN
- da-DK
- de-CH
- en-AU
- en-CA
- en-GB
- en-NZ
- en-US
- en-ZA
- es-AR
- es-CL
- es-CO
- es-CR
- es-DO
- es-EC
- es-ES
- es-GT
- es-HN
- es-MX
- es-NI
- es-PA
- es-PE
- es-PR
- es-SV
- es-UY
- es-VE
- fa-IR
- hi-IN
- it-IT
- ms-MY
- nb-NO
- nn-NO
- no-NB
- pt-BR
- pt-PT
- sr-ME
- tg-TJ
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-HK
- zh-TW
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusGnome
configs:
- ar-bal
- bg-csb
- ca-en_GB
- cs-eo
- cs-tk
- da-vi
- de-ha
- de-tt
- el-sk
- en_GB-my
dataset_info:
- config_name: ar-bal
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- bal
splits:
- name: train
num_bytes: 5150
num_examples: 60
download_size: 2503
dataset_size: 5150
- config_name: bg-csb
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- csb
splits:
- name: train
num_bytes: 172545
num_examples: 1768
download_size: 29706
dataset_size: 172545
- config_name: ca-en_GB
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ca
- en_GB
splits:
- name: train
num_bytes: 1007488
num_examples: 7982
download_size: 188727
dataset_size: 1007488
- config_name: cs-eo
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- eo
splits:
- name: train
num_bytes: 2895
num_examples: 73
download_size: 3055
dataset_size: 2895
- config_name: de-ha
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- ha
splits:
- name: train
num_bytes: 22899
num_examples: 216
download_size: 5287
dataset_size: 22899
- config_name: cs-tk
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- tk
splits:
- name: train
num_bytes: 1197731
num_examples: 18686
download_size: 98044
dataset_size: 1197731
- config_name: da-vi
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- da
- vi
splits:
- name: train
num_bytes: 9372
num_examples: 149
download_size: 5432
dataset_size: 9372
- config_name: en_GB-my
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en_GB
- my
splits:
- name: train
num_bytes: 3298074
num_examples: 28232
download_size: 362750
dataset_size: 3298074
- config_name: el-sk
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- sk
splits:
- name: train
num_bytes: 12121
num_examples: 150
download_size: 6116
dataset_size: 12121
- config_name: de-tt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- tt
splits:
- name: train
num_bytes: 134978
num_examples: 2169
download_size: 15891
dataset_size: 134978
---
# Dataset Card for Opus Gnome
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/GNOME.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
To load a language pair that isn't among the preconfigured ones, specify the two language codes directly.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/GNOME.php
E.g.
`dataset = load_dataset("opus_gnome", lang1="it", lang2="pl")`
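A slightly fuller sketch contrasting the two loading styles (the `lang1`/`lang2` call is the one shown above; `ar-bal` is one of the preconfigured pairs from the metadata):
```python
from datasets import load_dataset

# A preconfigured pair, loaded by config name.
configured = load_dataset("opus_gnome", "ar-bal", split="train")

# Any other valid pair from http://opus.nlpl.eu/GNOME.php, loaded by
# passing the two language codes explicitly.
custom = load_dataset("opus_gnome", lang1="it", lang2="pl", split="train")

print(configured[0]["translation"])
print(custom[0]["translation"])
```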
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```
{
'id': '0',
'translation': {
'ar': 'إعداد سياسة القفل',
'bal': 'تنظیم کتن سیاست کبل'
}
}
```
### Data Fields
Each instance has two fields, accessed as shown in the sketch after this list:
- **id**: the id of the example
- **translation**: a dictionary containing translated texts in two languages.
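A short, self-contained access sketch, reusing the `ar-bal` instance from Data Instances above:
```python
example = {
    "id": "0",
    "translation": {
        "ar": "إعداد سياسة القفل",
        "bal": "تنظیم کتن سیاست کبل",
    },
}

# `id` is a string identifier; `translation` maps language code to text.
print(example["id"])
for lang, text in example["translation"].items():
    print(lang, text)
```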
### Data Splits
Each configuration consists of a single train split. The number of examples for the preconfigured language pairs:
| | train |
|:---------|--------:|
| ar-bal | 60 |
| bg-csb | 1768 |
| ca-en_GB | 7982 |
| cs-eo | 73 |
| de-ha | 216 |
| cs-tk | 18686 |
| da-vi | 149 |
| en_GB-my | 28232 |
| el-sk | 150 |
| de-tt | 2169 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{TIEDEMANN12.463,
  author = {J{\"o}rg Tiedemann},
  title = {Parallel Data, Tools and Interfaces in OPUS},
  booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
  year = {2012},
  month = {may},
  date = {23-25},
  address = {Istanbul, Turkey},
  editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
  publisher = {European Language Resources Association (ELRA)},
  isbn = {978-2-9517408-7-7},
  language = {english}
}
```
### Contributions
Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset. |
opus_infopankki | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- en
- es
- et
- fa
- fi
- fr
- ru
- so
- sv
- tr
- zh
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusInfopankki
configs:
- ar-en
- ar-es
- ar-et
- ar-fa
- ar-fi
- ar-fr
- ar-ru
- ar-so
- ar-sv
- ar-tr
- ar-zh
- en-es
- en-et
- en-fa
- en-fi
- en-fr
- en-ru
- en-so
- en-sv
- en-tr
- en-zh
- es-et
- es-fa
- es-fi
- es-fr
- es-ru
- es-so
- es-sv
- es-tr
- es-zh
- et-fa
- et-fi
- et-fr
- et-ru
- et-so
- et-sv
- et-tr
- et-zh
- fa-fi
- fa-fr
- fa-ru
- fa-so
- fa-sv
- fa-tr
- fa-zh
- fi-fr
- fi-ru
- fi-so
- fi-sv
- fi-tr
- fi-zh
- fr-ru
- fr-so
- fr-sv
- fr-tr
- fr-zh
- ru-so
- ru-sv
- ru-tr
- ru-zh
- so-sv
- so-tr
- so-zh
- sv-tr
- sv-zh
- tr-zh
dataset_info:
- config_name: ar-en
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 10133385
num_examples: 50769
download_size: 1675642
dataset_size: 10133385
- config_name: ar-es
features:
- name: translation
dtype:
translation:
languages:
- ar
- es
splits:
- name: train
num_bytes: 8665395
num_examples: 40514
download_size: 1481047
dataset_size: 8665395
- config_name: ar-et
features:
- name: translation
dtype:
translation:
languages:
- ar
- et
splits:
- name: train
num_bytes: 9087595
num_examples: 46573
download_size: 1526418
dataset_size: 9087595
- config_name: ar-fa
features:
- name: translation
dtype:
translation:
languages:
- ar
- fa
splits:
- name: train
num_bytes: 12220236
num_examples: 47007
download_size: 1817143
dataset_size: 12220236
- config_name: ar-fi
features:
- name: translation
dtype:
translation:
languages:
- ar
- fi
splits:
- name: train
num_bytes: 9524305
num_examples: 49608
download_size: 1599735
dataset_size: 9524305
- config_name: ar-fr
features:
- name: translation
dtype:
translation:
languages:
- ar
- fr
splits:
- name: train
num_bytes: 8877669
num_examples: 41061
download_size: 1516374
dataset_size: 8877669
- config_name: ar-ru
features:
- name: translation
dtype:
translation:
languages:
- ar
- ru
splits:
- name: train
num_bytes: 13648242
num_examples: 50286
download_size: 1970843
dataset_size: 13648242
- config_name: ar-so
features:
- name: translation
dtype:
translation:
languages:
- ar
- so
splits:
- name: train
num_bytes: 9555588
num_examples: 44736
download_size: 1630676
dataset_size: 9555588
- config_name: ar-sv
features:
- name: translation
dtype:
translation:
languages:
- ar
- sv
splits:
- name: train
num_bytes: 8585175
num_examples: 43085
download_size: 1469533
dataset_size: 8585175
- config_name: ar-tr
features:
- name: translation
dtype:
translation:
languages:
- ar
- tr
splits:
- name: train
num_bytes: 8691117
num_examples: 41710
download_size: 1481787
dataset_size: 8691117
- config_name: ar-zh
features:
- name: translation
dtype:
translation:
languages:
- ar
- zh
splits:
- name: train
num_bytes: 5973658
num_examples: 29943
download_size: 1084404
dataset_size: 5973658
- config_name: en-es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 6934023
num_examples: 42657
download_size: 1333020
dataset_size: 6934023
- config_name: en-et
features:
- name: translation
dtype:
translation:
languages:
- en
- et
splits:
- name: train
num_bytes: 8211610
num_examples: 58410
download_size: 1509893
dataset_size: 8211610
- config_name: en-fa
features:
- name: translation
dtype:
translation:
languages:
- en
- fa
splits:
- name: train
num_bytes: 10166345
num_examples: 48277
download_size: 1657826
dataset_size: 10166345
- config_name: en-fi
features:
- name: translation
dtype:
translation:
languages:
- en
- fi
splits:
- name: train
num_bytes: 10913673
num_examples: 84645
download_size: 1860908
dataset_size: 10913673
- config_name: en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 8903231
num_examples: 56120
download_size: 1572554
dataset_size: 8903231
- config_name: en-ru
features:
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 15918259
num_examples: 75305
download_size: 2220544
dataset_size: 15918259
- config_name: en-so
features:
- name: translation
dtype:
translation:
languages:
- en
- so
splits:
- name: train
num_bytes: 7602330
num_examples: 47220
download_size: 1467156
dataset_size: 7602330
- config_name: en-sv
features:
- name: translation
dtype:
translation:
languages:
- en
- sv
splits:
- name: train
num_bytes: 7411023
num_examples: 51749
download_size: 1384139
dataset_size: 7411023
- config_name: en-tr
features:
- name: translation
dtype:
translation:
languages:
- en
- tr
splits:
- name: train
num_bytes: 6929194
num_examples: 44030
download_size: 1329853
dataset_size: 6929194
- config_name: en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 4666987
num_examples: 29907
download_size: 894750
dataset_size: 4666987
- config_name: es-et
features:
- name: translation
dtype:
translation:
languages:
- es
- et
splits:
- name: train
num_bytes: 6611996
num_examples: 42342
download_size: 1301067
dataset_size: 6611996
- config_name: es-fa
features:
- name: translation
dtype:
translation:
languages:
- es
- fa
splits:
- name: train
num_bytes: 9338250
num_examples: 41218
download_size: 1558933
dataset_size: 9338250
- config_name: es-fi
features:
- name: translation
dtype:
translation:
languages:
- es
- fi
splits:
- name: train
num_bytes: 6436338
num_examples: 41479
download_size: 1253298
dataset_size: 6436338
- config_name: es-fr
features:
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 7368764
num_examples: 41940
download_size: 1406167
dataset_size: 7368764
- config_name: es-ru
features:
- name: translation
dtype:
translation:
languages:
- es
- ru
splits:
- name: train
num_bytes: 9844977
num_examples: 41061
download_size: 1595928
dataset_size: 9844977
- config_name: es-so
features:
- name: translation
dtype:
translation:
languages:
- es
- so
splits:
- name: train
num_bytes: 7257078
num_examples: 41752
download_size: 1438303
dataset_size: 7257078
- config_name: es-sv
features:
- name: translation
dtype:
translation:
languages:
- es
- sv
splits:
- name: train
num_bytes: 6650692
num_examples: 41256
download_size: 1291291
dataset_size: 6650692
- config_name: es-tr
features:
- name: translation
dtype:
translation:
languages:
- es
- tr
splits:
- name: train
num_bytes: 7144105
num_examples: 42191
download_size: 1372312
dataset_size: 7144105
- config_name: es-zh
features:
- name: translation
dtype:
translation:
languages:
- es
- zh
splits:
- name: train
num_bytes: 4358775
num_examples: 26004
download_size: 810902
dataset_size: 4358775
- config_name: et-fa
features:
- name: translation
dtype:
translation:
languages:
- et
- fa
splits:
- name: train
num_bytes: 9796036
num_examples: 47633
download_size: 1603405
dataset_size: 9796036
- config_name: et-fi
features:
- name: translation
dtype:
translation:
languages:
- et
- fi
splits:
- name: train
num_bytes: 7657037
num_examples: 57353
download_size: 1425641
dataset_size: 7657037
- config_name: et-fr
features:
- name: translation
dtype:
translation:
languages:
- et
- fr
splits:
- name: train
num_bytes: 7012470
num_examples: 44753
download_size: 1355458
dataset_size: 7012470
- config_name: et-ru
features:
- name: translation
dtype:
translation:
languages:
- et
- ru
splits:
- name: train
num_bytes: 12001439
num_examples: 55901
download_size: 1812764
dataset_size: 12001439
- config_name: et-so
features:
- name: translation
dtype:
translation:
languages:
- et
- so
splits:
- name: train
num_bytes: 7260837
num_examples: 46933
download_size: 1432147
dataset_size: 7260837
- config_name: et-sv
features:
- name: translation
dtype:
translation:
languages:
- et
- sv
splits:
- name: train
num_bytes: 6523081
num_examples: 46775
download_size: 1268616
dataset_size: 6523081
- config_name: et-tr
features:
- name: translation
dtype:
translation:
languages:
- et
- tr
splits:
- name: train
num_bytes: 6621705
num_examples: 43729
download_size: 1299911
dataset_size: 6621705
- config_name: et-zh
features:
- name: translation
dtype:
translation:
languages:
- et
- zh
splits:
- name: train
num_bytes: 4305297
num_examples: 27826
download_size: 808812
dataset_size: 4305297
- config_name: fa-fi
features:
- name: translation
dtype:
translation:
languages:
- fa
- fi
splits:
- name: train
num_bytes: 9579297
num_examples: 46924
download_size: 1574886
dataset_size: 9579297
- config_name: fa-fr
features:
- name: translation
dtype:
translation:
languages:
- fa
- fr
splits:
- name: train
num_bytes: 9574294
num_examples: 41975
download_size: 1591112
dataset_size: 9574294
- config_name: fa-ru
features:
- name: translation
dtype:
translation:
languages:
- fa
- ru
splits:
- name: train
num_bytes: 13544491
num_examples: 47814
download_size: 1947217
dataset_size: 13544491
- config_name: fa-so
features:
- name: translation
dtype:
translation:
languages:
- fa
- so
splits:
- name: train
num_bytes: 10254763
num_examples: 45571
download_size: 1722085
dataset_size: 10254763
- config_name: fa-sv
features:
- name: translation
dtype:
translation:
languages:
- fa
- sv
splits:
- name: train
num_bytes: 9153792
num_examples: 43510
download_size: 1519092
dataset_size: 9153792
- config_name: fa-tr
features:
- name: translation
dtype:
translation:
languages:
- fa
- tr
splits:
- name: train
num_bytes: 9393249
num_examples: 42708
download_size: 1559312
dataset_size: 9393249
- config_name: fa-zh
features:
- name: translation
dtype:
translation:
languages:
- fa
- zh
splits:
- name: train
num_bytes: 5792463
num_examples: 27748
download_size: 1027887
dataset_size: 5792463
- config_name: fi-fr
features:
- name: translation
dtype:
translation:
languages:
- fi
- fr
splits:
- name: train
num_bytes: 8310899
num_examples: 55087
download_size: 1488763
dataset_size: 8310899
- config_name: fi-ru
features:
- name: translation
dtype:
translation:
languages:
- fi
- ru
splits:
- name: train
num_bytes: 15188232
num_examples: 74699
download_size: 2142712
dataset_size: 15188232
- config_name: fi-so
features:
- name: translation
dtype:
translation:
languages:
- fi
- so
splits:
- name: train
num_bytes: 7076261
num_examples: 46032
download_size: 1387424
dataset_size: 7076261
- config_name: fi-sv
features:
- name: translation
dtype:
translation:
languages:
- fi
- sv
splits:
- name: train
num_bytes: 6947272
num_examples: 51506
download_size: 1312272
dataset_size: 6947272
- config_name: fi-tr
features:
- name: translation
dtype:
translation:
languages:
- fi
- tr
splits:
- name: train
num_bytes: 6438756
num_examples: 42781
download_size: 1251294
dataset_size: 6438756
- config_name: fi-zh
features:
- name: translation
dtype:
translation:
languages:
- fi
- zh
splits:
- name: train
num_bytes: 4434192
num_examples: 29503
download_size: 864043
dataset_size: 4434192
- config_name: fr-ru
features:
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 12564244
num_examples: 54213
download_size: 1862751
dataset_size: 12564244
- config_name: fr-so
features:
- name: translation
dtype:
translation:
languages:
- fr
- so
splits:
- name: train
num_bytes: 7473599
num_examples: 42652
download_size: 1471709
dataset_size: 7473599
- config_name: fr-sv
features:
- name: translation
dtype:
translation:
languages:
- fr
- sv
splits:
- name: train
num_bytes: 7027603
num_examples: 43524
download_size: 1343061
dataset_size: 7027603
- config_name: fr-tr
features:
- name: translation
dtype:
translation:
languages:
- fr
- tr
splits:
- name: train
num_bytes: 7341118
num_examples: 43036
download_size: 1399175
dataset_size: 7341118
- config_name: fr-zh
features:
- name: translation
dtype:
translation:
languages:
- fr
- zh
splits:
- name: train
num_bytes: 4525133
num_examples: 26654
download_size: 850456
dataset_size: 4525133
- config_name: ru-so
features:
- name: translation
dtype:
translation:
languages:
- ru
- so
splits:
- name: train
num_bytes: 10809233
num_examples: 45430
download_size: 1742599
dataset_size: 10809233
- config_name: ru-sv
features:
- name: translation
dtype:
translation:
languages:
- ru
- sv
splits:
- name: train
num_bytes: 10517473
num_examples: 47672
download_size: 1634682
dataset_size: 10517473
- config_name: ru-tr
features:
- name: translation
dtype:
translation:
languages:
- ru
- tr
splits:
- name: train
num_bytes: 9930632
num_examples: 42587
download_size: 1591805
dataset_size: 9930632
- config_name: ru-zh
features:
- name: translation
dtype:
translation:
languages:
- ru
- zh
splits:
- name: train
num_bytes: 6417832
num_examples: 29523
download_size: 1109274
dataset_size: 6417832
- config_name: so-sv
features:
- name: translation
dtype:
translation:
languages:
- so
- sv
splits:
- name: train
num_bytes: 6763794
num_examples: 42384
download_size: 1353892
dataset_size: 6763794
- config_name: so-tr
features:
- name: translation
dtype:
translation:
languages:
- so
- tr
splits:
- name: train
num_bytes: 7272389
num_examples: 43242
download_size: 1440287
dataset_size: 7272389
- config_name: so-zh
features:
- name: translation
dtype:
translation:
languages:
- so
- zh
splits:
- name: train
num_bytes: 4535979
num_examples: 27090
download_size: 859149
dataset_size: 4535979
- config_name: sv-tr
features:
- name: translation
dtype:
translation:
languages:
- sv
- tr
splits:
- name: train
num_bytes: 6637784
num_examples: 42555
download_size: 1288209
dataset_size: 6637784
- config_name: sv-zh
features:
- name: translation
dtype:
translation:
languages:
- sv
- zh
splits:
- name: train
num_bytes: 4216429
num_examples: 26898
download_size: 779012
dataset_size: 4216429
- config_name: tr-zh
features:
- name: translation
dtype:
translation:
languages:
- tr
- zh
splits:
- name: train
num_bytes: 4494095
num_examples: 27323
download_size: 841988
dataset_size: 4494095
---
# Dataset Card for OpusInfopankki
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [infopankki](http://opus.nlpl.eu/infopankki-v1.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A parallel corpus of 12 languages, 66 bitexts.
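Any of the 66 configs listed in the YAML header can be loaded by name with the `datasets` library. A minimal sketch (assuming this card's Hub id is `opus_infopankki`; the `en-fi` config is one of those listed above):
```python
from datasets import load_dataset

# Load the English-Finnish bitext; config names follow the
# `<lang1>-<lang2>` pattern used by the other configs.
dataset = load_dataset("opus_infopankki", "en-fi")

# Each record holds a `translation` dict keyed by language code.
print(dataset["train"][0]["translation"])
```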
### Supported Tasks and Leaderboards
The underlying task is machine translation.
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
opus_memat | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- xh
license:
- unknown
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusMemat
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- xh
- en
config_name: xh-en
splits:
- name: train
num_bytes: 25400570
num_examples: 154764
download_size: 8382865
dataset_size: 25400570
---
# Dataset Card for [opus_memat]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [memat](http://opus.nlpl.eu/memat.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A Xhosa-English parallel corpus. Funded by the EPSRC, the Medical Machine Translation project worked on machine translation between isiXhosa and English, with a focus on the medical domain.
### Supported Tasks and Leaderboards
The underlying task is machine translation from Xhosa to English.
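A minimal loading sketch with the `datasets` library, using the single `xh-en` config from the YAML header:
```python
from datasets import load_dataset

# The only config pairs Xhosa (xh) with English (en).
dataset = load_dataset("opus_memat", "xh-en")

# Inspect one aligned sentence pair from the train split.
print(dataset["train"][0]["translation"])
```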
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
### Contributions
Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset. |
opus_montenegrinsubs | ---
annotations_creators:
- found
language_creators:
- found
language:
- cnr
- en
license:
- unknown
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusMontenegrinsubs
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- me
config_name: en-me
splits:
- name: train
num_bytes: 4896403
num_examples: 65043
download_size: 1990570
dataset_size: 4896403
---
# Dataset Card for [opus_montenegrinsubs]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [opus MontenegrinSubs](http://opus.nlpl.eu/MontenegrinSubs.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Opus MontenegrinSubs dataset for the machine translation task, covering the language pair en-me: English and Montenegrin.
### Supported Tasks and Leaderboards
The underlying task is machine translation from English (en) to Montenegrin (me).
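A minimal loading sketch with the `datasets` library, using the `en-me` config from the YAML header:
```python
from datasets import load_dataset

# Load the English-Montenegrin bitext; only a train split is provided.
dataset = load_dataset("opus_montenegrinsubs", "en-me")
print(dataset["train"][0]["translation"])
```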
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
### Contributions
Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset. |
opus_openoffice | ---
annotations_creators:
- found
language_creators:
- found
language:
- de
- en
- es
- fr
- ja
- ru
- sv
- zh
language_bcp47:
- en-GB
- zh-CN
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusOpenoffice
configs:
- de-en_GB
- de-es
- de-fr
- de-ja
- de-ru
- de-sv
- de-zh_CN
- en_GB-es
- en_GB-fr
- en_GB-ja
- en_GB-ru
- en_GB-sv
- en_GB-zh_CN
- es-fr
- es-ja
- es-ru
- es-sv
- es-zh_CN
- fr-ja
- fr-ru
- fr-sv
- fr-zh_CN
- ja-ru
- ja-sv
- ja-zh_CN
- ru-sv
- ru-zh_CN
- sv-zh_CN
dataset_info:
- config_name: de-en_GB
features:
- name: translation
dtype:
translation:
languages:
- de
- en_GB
splits:
- name: train
num_bytes: 6201141
num_examples: 77052
download_size: 2030226
dataset_size: 6201141
- config_name: de-es
features:
- name: translation
dtype:
translation:
languages:
- de
- es
splits:
- name: train
num_bytes: 6571679
num_examples: 77000
download_size: 2100214
dataset_size: 6571679
- config_name: de-fr
features:
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 6715869
num_examples: 76684
download_size: 2111078
dataset_size: 6715869
- config_name: de-ja
features:
- name: translation
dtype:
translation:
languages:
- de
- ja
splits:
- name: train
num_bytes: 7085007
num_examples: 69396
download_size: 2112771
dataset_size: 7085007
- config_name: de-ru
features:
- name: translation
dtype:
translation:
languages:
- de
- ru
splits:
- name: train
num_bytes: 8333305
num_examples: 75511
download_size: 2267499
dataset_size: 8333305
- config_name: de-sv
features:
- name: translation
dtype:
translation:
languages:
- de
- sv
splits:
- name: train
num_bytes: 6289026
num_examples: 77366
download_size: 2056115
dataset_size: 6289026
- config_name: de-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- de
- zh_CN
splits:
- name: train
num_bytes: 5836684
num_examples: 68712
download_size: 2006818
dataset_size: 5836684
- config_name: en_GB-es
features:
- name: translation
dtype:
translation:
languages:
- en_GB
- es
splits:
- name: train
num_bytes: 6147645
num_examples: 77646
download_size: 1978922
dataset_size: 6147645
- config_name: en_GB-fr
features:
- name: translation
dtype:
translation:
languages:
- en_GB
- fr
splits:
- name: train
num_bytes: 6297843
num_examples: 77696
download_size: 1987317
dataset_size: 6297843
- config_name: en_GB-ja
features:
- name: translation
dtype:
translation:
languages:
- en_GB
- ja
splits:
- name: train
num_bytes: 6636778
num_examples: 69149
download_size: 1987255
dataset_size: 6636778
- config_name: en_GB-ru
features:
- name: translation
dtype:
translation:
languages:
- en_GB
- ru
splits:
- name: train
num_bytes: 7878034
num_examples: 75401
download_size: 2137510
dataset_size: 7878034
- config_name: en_GB-sv
features:
- name: translation
dtype:
translation:
languages:
- en_GB
- sv
splits:
- name: train
num_bytes: 5861525
num_examples: 77815
download_size: 1934619
dataset_size: 5861525
- config_name: en_GB-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- en_GB
- zh_CN
splits:
- name: train
num_bytes: 5424921
num_examples: 69400
download_size: 1887600
dataset_size: 5424921
- config_name: es-fr
features:
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 6663156
num_examples: 77417
download_size: 2059241
dataset_size: 6663156
- config_name: es-ja
features:
- name: translation
dtype:
translation:
languages:
- es
- ja
splits:
- name: train
num_bytes: 7005179
num_examples: 68944
download_size: 2059072
dataset_size: 7005179
- config_name: es-ru
features:
- name: translation
dtype:
translation:
languages:
- es
- ru
splits:
- name: train
num_bytes: 8283767
num_examples: 76461
download_size: 2214447
dataset_size: 8283767
- config_name: es-sv
features:
- name: translation
dtype:
translation:
languages:
- es
- sv
splits:
- name: train
num_bytes: 6232530
num_examples: 77825
download_size: 2002804
dataset_size: 6232530
- config_name: es-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- es
- zh_CN
splits:
- name: train
num_bytes: 5776883
num_examples: 68583
download_size: 1958411
dataset_size: 5776883
- config_name: fr-ja
features:
- name: translation
dtype:
translation:
languages:
- fr
- ja
splits:
- name: train
num_bytes: 7160388
num_examples: 69026
download_size: 2069621
dataset_size: 7160388
- config_name: fr-ru
features:
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 8432125
num_examples: 76464
download_size: 2222427
dataset_size: 8432125
- config_name: fr-sv
features:
- name: translation
dtype:
translation:
languages:
- fr
- sv
splits:
- name: train
num_bytes: 6373414
num_examples: 77398
download_size: 2014028
dataset_size: 6373414
- config_name: fr-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- fr
- zh_CN
splits:
- name: train
num_bytes: 5918538
num_examples: 68723
download_size: 1966020
dataset_size: 5918538
- config_name: ja-ru
features:
- name: translation
dtype:
translation:
languages:
- ja
- ru
splits:
- name: train
num_bytes: 8781286
num_examples: 68589
download_size: 2224576
dataset_size: 8781286
- config_name: ja-sv
features:
- name: translation
dtype:
translation:
languages:
- ja
- sv
splits:
- name: train
num_bytes: 6709683
num_examples: 69154
download_size: 2012693
dataset_size: 6709683
- config_name: ja-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- ja
- zh_CN
splits:
- name: train
num_bytes: 6397732
num_examples: 68953
download_size: 1972833
dataset_size: 6397732
- config_name: ru-sv
features:
- name: translation
dtype:
translation:
languages:
- ru
- sv
splits:
- name: train
num_bytes: 7966214
num_examples: 75560
download_size: 2167678
dataset_size: 7966214
- config_name: ru-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- ru
- zh_CN
splits:
- name: train
num_bytes: 7393715
num_examples: 66259
download_size: 2098229
dataset_size: 7393715
- config_name: sv-zh_CN
features:
- name: translation
dtype:
translation:
languages:
- sv
- zh_CN
splits:
- name: train
num_bytes: 5492958
num_examples: 68846
download_size: 1914096
dataset_size: 5492958
---
# Dataset Card for OpusOpenoffice
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [OpenOffice](http://opus.nlpl.eu/OpenOffice.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
A collection of documents from http://www.openoffice.org/, covering 8 languages and 28 bitexts.
### Supported Tasks and Leaderboards
The underlying task is machine translation.
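Any of the 28 configs listed in the YAML header can be loaded by name; a minimal sketch with the `datasets` library:
```python
from datasets import load_dataset

# Load the German / British English bitext; note the underscore in
# region-qualified codes such as `en_GB` and `zh_CN`.
dataset = load_dataset("opus_openoffice", "de-en_GB")
print(dataset["train"][0]["translation"])
```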
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
opus_paracrawl | ---
annotations_creators:
- found
language_creators:
- found
language:
- bg
- ca
- cs
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- is
- it
- km
- ko
- lt
- lv
- mt
- my
- nb
- ne
- nl
- nn
- pl
- pt
- ro
- ru
- si
- sk
- sl
- so
- sv
- sw
- tl
- uk
- zh
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1M<n<10M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusParaCrawl
configs:
- de-pl
- el-en
- en-ha
- en-ig
- en-km
- en-so
- en-sw
- en-tl
- es-gl
- fr-nl
dataset_info:
- config_name: el-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- en
splits:
- name: train
num_bytes: 6760375061
num_examples: 21402471
download_size: 2317102846
dataset_size: 6760375061
- config_name: en-ha
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ha
splits:
- name: train
num_bytes: 4618460
num_examples: 19694
download_size: 1757433
dataset_size: 4618460
- config_name: en-ig
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ig
splits:
- name: train
num_bytes: 6709030
num_examples: 28829
download_size: 2691716
dataset_size: 6709030
- config_name: en-km
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- km
splits:
- name: train
num_bytes: 31964493
num_examples: 65115
download_size: 9907279
dataset_size: 31964493
- config_name: en-so
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- so
splits:
- name: train
num_bytes: 5791003
num_examples: 14880
download_size: 2227727
dataset_size: 5791003
- config_name: de-pl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- pl
splits:
- name: train
num_bytes: 298637031
num_examples: 916643
download_size: 106891602
dataset_size: 298637031
- config_name: fr-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- nl
splits:
- name: train
num_bytes: 862303220
num_examples: 2687673
download_size: 319804705
dataset_size: 862303220
- config_name: en-sw
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- sw
splits:
- name: train
num_bytes: 44264442
num_examples: 132520
download_size: 18611087
dataset_size: 44264442
- config_name: en-tl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- tl
splits:
- name: train
num_bytes: 82502798
num_examples: 248689
download_size: 32933118
dataset_size: 82502798
- config_name: es-gl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- gl
splits:
- name: train
num_bytes: 582660901
num_examples: 1879689
download_size: 236696353
dataset_size: 582660901
---
# Dataset Card for OpusParaCrawl
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/ParaCrawl.php
- **Repository:** None
- **Paper:** [ParaCrawl: Web-Scale Acquisition of Parallel Corpora](https://aclanthology.org/2020.acl-main.417/)
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
Parallel corpora from Web Crawls collected in the ParaCrawl project.
The dataset contains:
- 42 languages, 43 bitexts
- total number of files: 59,996
- total number of tokens: 56.11G
- total number of sentence fragments: 3.13G
To load a language pair that isn't part of the predefined configs, specify the language codes as a pair, e.g.
```python
dataset = load_dataset("opus_paracrawl", lang1="en", lang2="so")
```
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/ParaCrawl.php
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- bg
- ca
- cs
- da
- de
- el
- en
- es
- et
- eu
- fi
- fr
- ga
- gl
- hr
- hu
- is
- it
- km
- ko
- lt
- lv
- mt
- my
- nb
- ne
- nl
- nn
- pl
- pt
- ro
- ru
- si
- sk
- sl
- so
- sv
- sw
- tl
- uk
- zh
## Dataset Structure
### Data Instances
```
{
'id': '0',
'translation': {
"el": "Συνεχίστε ευθεία 300 μέτρα μέχρι να καταλήξουμε σε μια σωστή οδός (ul. Gagarina)? Περπατήστε περίπου 300 μέτρα μέχρι να φτάσετε το πρώτο ορθή οδός (ul Khotsa Namsaraeva)?",
"en": "Go straight 300 meters until you come to a proper street (ul. Gagarina); Walk approximately 300 meters until you reach the first proper street (ul Khotsa Namsaraeva);"
}
}
```
### Data Fields
- `id` (`str`): Unique identifier of the parallel sentence for the pair of languages.
- `translation` (`dict`): Parallel sentences for the pair of languages.
### Data Splits
The dataset contains a single `train` split.
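A minimal sketch of working with the single split (the expected row count for `en-so` comes from the metadata above):
```python
from datasets import load_dataset

# Load a predefined pair; only a `train` split is provided.
dataset = load_dataset("opus_paracrawl", "en-so")

train = dataset["train"]
print(train.num_rows)  # 14880 examples per the dataset metadata
print(train[0]["id"], train[0]["translation"]["en"])
```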
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
- Creative commons CC0 (no rights reserved)
### Citation Information
```bibtex
@inproceedings{banon-etal-2020-paracrawl,
title = "{P}ara{C}rawl: Web-Scale Acquisition of Parallel Corpora",
author = "Ba{\~n}{\'o}n, Marta and
Chen, Pinzhen and
Haddow, Barry and
Heafield, Kenneth and
Hoang, Hieu and
Espl{\`a}-Gomis, Miquel and
Forcada, Mikel L. and
Kamran, Amir and
Kirefu, Faheem and
Koehn, Philipp and
Ortiz Rojas, Sergio and
Pla Sempere, Leopoldo and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Sarr{\'\i}as, Elsa and
Strelec, Marek and
Thompson, Brian and
Waites, William and
Wiggins, Dion and
Zaragoza, Jaume",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.acl-main.417",
doi = "10.18653/v1/2020.acl-main.417",
pages = "4555--4567",
}
```
```bibtex
@InProceedings{TIEDEMANN12.463,
author = {Jörg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Uğur Doğan and Bente Maegaard and Joseph Mariani and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset. |
opus_rf | ---
annotations_creators:
- found
language_creators:
- expert-generated
language:
- de
- en
- es
- fr
- sv
license:
- unknown
multilinguality:
- multilingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusRf
configs:
- de-en
- de-es
- de-fr
- de-sv
- en-es
- en-fr
- en-sv
- es-fr
- es-sv
- fr-sv
dataset_info:
- config_name: de-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 38683
num_examples: 177
download_size: 16029
dataset_size: 38683
- config_name: de-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- es
splits:
- name: train
num_bytes: 2316
num_examples: 24
download_size: 2403
dataset_size: 2316
- config_name: de-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 41300
num_examples: 173
download_size: 16720
dataset_size: 41300
- config_name: de-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- sv
splits:
- name: train
num_bytes: 37414
num_examples: 178
download_size: 15749
dataset_size: 37414
- config_name: en-es
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 2600
num_examples: 25
download_size: 2485
dataset_size: 2600
- config_name: en-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 39503
num_examples: 175
download_size: 16038
dataset_size: 39503
- config_name: en-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- sv
splits:
- name: train
num_bytes: 35778
num_examples: 180
download_size: 15147
dataset_size: 35778
- config_name: es-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 2519
num_examples: 21
download_size: 2469
dataset_size: 2519
- config_name: es-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- sv
splits:
- name: train
num_bytes: 3110
num_examples: 28
download_size: 2726
dataset_size: 3110
- config_name: fr-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fr
- sv
splits:
- name: train
num_bytes: 38627
num_examples: 175
download_size: 15937
dataset_size: 38627
---
# Dataset Card for OpusRf
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/RF.php
- **Repository:**
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
RF is a tiny parallel corpus of the Declarations of the Swedish Government and its translations.
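A minimal loading sketch with the `datasets` library, using the `en-sv` config (180 sentence pairs per the YAML header):
```python
from datasets import load_dataset

# Load the English-Swedish declarations.
dataset = load_dataset("opus_rf", "en-sv")
print(dataset["train"][0]["translation"])
```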
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English (en), Spanish (es), German (de), French (fr), Swedish (sv)
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@akshayb7](https://github.com/akshayb7) for adding this dataset. |
opus_tedtalks | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- hr
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusTedtalks
dataset_info:
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- hr
config_name: en-hr
splits:
- name: train
num_bytes: 15249417
num_examples: 86348
download_size: 5639306
dataset_size: 15249417
---
# Dataset Card for OpusTedtalks
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/TedTalks.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This is a Croatian-English parallel corpus of transcribed and translated TED talks, originally extracted from https://wit3.fbk.eu. The corpus was compiled by Željko Agić and is taken from http://lt.ffzg.hr/zagic, provided under the CC-BY-NC-SA license. The corpus is sentence-aligned; the documents were collected and aligned using the Hunalign algorithm.
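A minimal loading sketch with the `datasets` library, using the single `en-hr` config:
```python
from datasets import load_dataset

# The corpus ships as one config with a single train split.
dataset = load_dataset("opus_tedtalks", "en-hr")

example = dataset["train"][0]
print(example["translation"]["en"])
print(example["translation"]["hr"])
```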
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The dataset is provided for research purposes only. Please check the dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[CC-BY-NC-SA license](http://creativecommons.org/licenses/by-sa/3.0/)
### Citation Information
```
@InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset. |
opus_ubuntu | ---
annotations_creators:
- crowdsourced
- expert-generated
language_creators:
- found
language:
- ace
- af
- ak
- am
- an
- ang
- ar
- ary
- as
- ast
- az
- ba
- bal
- be
- bem
- ber
- bg
- bho
- bn
- bo
- br
- brx
- bs
- bua
- byn
- ca
- ce
- ceb
- chr
- ckb
- co
- crh
- cs
- csb
- cv
- cy
- da
- de
- dsb
- dv
- dz
- el
- en
- eo
- es
- et
- eu
- fa
- ff
- fi
- fil
- fo
- fr
- frm
- frp
- fur
- fy
- ga
- gd
- gl
- gn
- grc
- gu
- guc
- gv
- ha
- haw
- he
- hi
- hil
- hne
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- ig
- io
- is
- it
- iu
- ja
- jbo
- jv
- ka
- kab
- kg
- kk
- kl
- km
- kn
- ko
- kok
- ks
- ksh
- ku
- kw
- ky
- la
- lb
- lg
- li
- lij
- lld
- ln
- lo
- lt
- ltg
- lv
- mai
- mg
- mh
- mhr
- mi
- miq
- mk
- ml
- mn
- mr
- ms
- mt
- mus
- my
- nan
- nap
- nb
- nds
- ne
- nhn
- nl
- nn
- 'no'
- nso
- ny
- oc
- om
- or
- os
- pa
- pam
- pap
- pl
- pms
- pmy
- ps
- pt
- qu
- rm
- ro
- rom
- ru
- rw
- sa
- sc
- sco
- sd
- se
- shn
- shs
- si
- sk
- sl
- sm
- sml
- sn
- so
- son
- sq
- sr
- st
- sv
- sw
- syr
- szl
- ta
- te
- tet
- tg
- th
- ti
- tk
- tl
- tlh
- tr
- trv
- ts
- tt
- ug
- uk
- ur
- uz
- ve
- vec
- vi
- wa
- wae
- wo
- xal
- xh
- yi
- yo
- zh
- zu
- zza
language_bcp47:
- ar-SY
- bn-IN
- de-AT
- de-DE
- en-AU
- en-CA
- en-GB
- en-NZ
- en-US
- es-AR
- es-CL
- es-CO
- es-CR
- es-DO
- es-EC
- es-ES
- es-GT
- es-HN
- es-MX
- es-NI
- es-PA
- es-PE
- es-PR
- es-SV
- es-UY
- es-VE
- fa-AF
- fr-CA
- fr-FR
- nl-NL
- pt-BR
- pt-PT
- ta-LK
- zh-CN
- zh-HK
- zh-TW
license:
- bsd-3-clause
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: Opus Ubuntu
configs:
- as-bs
- az-cs
- bg-de
- bn-ga
- br-es_PR
- br-hi
- br-la
- br-uz
- br-yi
- bs-szl
dataset_info:
- config_name: as-bs
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- as
- bs
splits:
- name: train
num_bytes: 1037811
num_examples: 8583
download_size: 229723
dataset_size: 1037811
- config_name: az-cs
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- az
- cs
splits:
- name: train
num_bytes: 17821
num_examples: 293
download_size: 9501
dataset_size: 17821
- config_name: bg-de
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- de
splits:
- name: train
num_bytes: 27627
num_examples: 184
download_size: 9994
dataset_size: 27627
- config_name: br-es_PR
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- br
- es_PR
splits:
- name: train
num_bytes: 8875
num_examples: 125
download_size: 5494
dataset_size: 8875
- config_name: bn-ga
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bn
- ga
splits:
- name: train
num_bytes: 584629
num_examples: 7324
download_size: 142710
dataset_size: 584629
- config_name: br-hi
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- br
- hi
splits:
- name: train
num_bytes: 1300081
num_examples: 15551
download_size: 325415
dataset_size: 1300081
- config_name: br-la
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- br
- la
splits:
- name: train
num_bytes: 29341
num_examples: 527
download_size: 11565
dataset_size: 29341
- config_name: bs-szl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bs
- szl
splits:
- name: train
num_bytes: 41116
num_examples: 646
download_size: 18134
dataset_size: 41116
- config_name: br-uz
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- br
- uz
splits:
- name: train
num_bytes: 110278
num_examples: 1416
download_size: 33595
dataset_size: 110278
- config_name: br-yi
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- br
- yi
splits:
- name: train
num_bytes: 172846
num_examples: 2799
download_size: 41956
dataset_size: 172846
---
# Dataset Card for Opus Ubuntu
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/Ubuntu.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
These are translations of the Ubuntu software package messages, donated by the Ubuntu community.
To load a language pair that isn't part of the predefined configs, specify the language codes as a pair.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/Ubuntu.php
E.g.
`dataset = load_dataset("opus_ubuntu", lang1="it", lang2="pl")`
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
Example instance:
```
{
'id': '0',
'translation': {
'it': 'Comprende Gmail, Google Docs, Google+, YouTube e Picasa',
'pl': 'Zawiera Gmail, Google Docs, Google+, YouTube oraz Picasa'
}
}
```
### Data Fields
Each instance has two fields:
- **id**: the id of the example
- **translation**: a dictionary containing translated texts in two languages.
### Data Splits
Each config consists of a single `train` split. The number of examples for the predefined language pairs is given below; a quick way to check these counts is sketched after the table:
| | train |
|:---------|--------:|
| as-bs | 8583 |
| az-cs | 293 |
| bg-de | 184 |
| br-es_PR | 125 |
| bn-ga | 7324 |
| br-hi | 15551 |
| br-la | 527 |
| bs-szl | 646 |
| br-uz | 1416 |
| br-yi | 2799 |
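A minimal sketch for re-deriving a couple of these counts with the `datasets` library:
```python
from datasets import load_dataset

# Expected counts from the table: as-bs -> 8583, az-cs -> 293.
for config in ["as-bs", "az-cs"]:
    dataset = load_dataset("opus_ubuntu", config)
    print(config, dataset["train"].num_rows)
```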
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
BSD "Revised" license (see [Translations copyright](https://help.launchpad.net/Legal#Translations_copyright))
### Citation Information
```bibtex
@InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset. |
opus_wikipedia | ---
annotations_creators:
- found
language_creators:
- found
language:
- ar
- bg
- cs
- de
- el
- en
- es
- fa
- fr
- he
- hu
- it
- nl
- pl
- pt
- ro
- ru
- sl
- tr
- vi
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusWikipedia
configs:
- ar-en
- ar-pl
- en-ru
- en-sl
- en-vi
dataset_info:
- config_name: ar-en
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: train
num_bytes: 45207715
num_examples: 151136
download_size: 16097997
dataset_size: 45207715
- config_name: ar-pl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- pl
splits:
- name: train
num_bytes: 304851676
num_examples: 823715
download_size: 104585718
dataset_size: 304851676
- config_name: en-sl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- sl
splits:
- name: train
num_bytes: 30479739
num_examples: 140124
download_size: 11727538
dataset_size: 30479739
- config_name: en-ru
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 167649057
num_examples: 572717
download_size: 57356138
dataset_size: 167649057
- config_name: en-vi
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- vi
splits:
- name: train
num_bytes: 7571598
num_examples: 58116
download_size: 2422413
dataset_size: 7571598
---
# Dataset Card for OpusWikipedia
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/Wikipedia.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This is a corpus of parallel sentences extracted from Wikipedia by Krzysztof Wołk and Krzysztof Marasek.
The dataset contains 20 languages and 36 bitexts.
To load a language pair that isn't among the predefined configs, simply specify the two language codes as keyword arguments,
e.g.
```python
from datasets import load_dataset

dataset = load_dataset("opus_wikipedia", lang1="it", lang2="pl")
```
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/Wikipedia.php
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- ar
- bg
- cs
- de
- el
- en
- es
- fa
- fr
- he
- hu
- it
- nl
- pl
- pt
- ro
- ru
- sl
- tr
- vi
## Dataset Structure
### Data Instances
```
{
'id': '0',
'translation': {
"ar": "* Encyclopaedia of Mathematics online encyclopaedia from Springer, Graduate-level reference work with over 8,000 entries, illuminating nearly 50,000 notions in mathematics.",
"en": "*Encyclopaedia of Mathematics online encyclopaedia from Springer, Graduate-level reference work with over 8,000 entries, illuminating nearly 50,000 notions in mathematics."
}
}
```
### Data Fields
- `id` (`str`): Unique identifier of the parallel sentence for the pair of languages.
- `translation` (`dict`): Parallel sentences for the pair of languages.
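For instance, a row's `translation` dict can be indexed by language code. A minimal sketch, assuming the predefined `ar-en` config:

```python
from datasets import load_dataset

# Load the predefined Arabic-English config (single "train" split).
dataset = load_dataset("opus_wikipedia", "ar-en")

# Each example holds a `translation` dict keyed by language code.
pair = dataset["train"][0]["translation"]
print(pair["ar"])
print(pair["en"])
```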
### Data Splits
The dataset contains a single `train` split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@article{WOLK2014126,
title = {Building Subject-aligned Comparable Corpora and Mining it for Truly Parallel Sentence Pairs},
journal = {Procedia Technology},
volume = {18},
pages = {126-132},
year = {2014},
note = {International workshop on Innovations in Information and Communication Science and Technology, IICST 2014, 3-5 September 2014, Warsaw, Poland},
issn = {2212-0173},
doi = {https://doi.org/10.1016/j.protcy.2014.11.024},
url = {https://www.sciencedirect.com/science/article/pii/S2212017314005453},
author = {Krzysztof Wołk and Krzysztof Marasek},
keywords = {Comparable corpora, machine translation, NLP},
}
```
```bibtex
@InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset. |
opus_xhosanavy | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
- xh
license:
- unknown
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusXhosanavy
dataset_info:
features:
- name: translation
dtype:
translation:
languages:
- en
- xh
config_name: en-xh
splits:
- name: train
num_bytes: 9654422
num_examples: 49982
download_size: 3263865
dataset_size: 9654422
---
# Dataset Card for OpusXhosanavy
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [XhosaNavy](http://opus.nlpl.eu/XhosaNavy-v1.php)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This corpus is part of OPUS, the open collection of parallel corpora.
OPUS website: http://opus.nlpl.eu
### Supported Tasks and Leaderboards
The underlying task is machine translation from English to Xhosa, as sketched below.
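A minimal loading sketch, using the `en-xh` config declared in this card's metadata:

```python
from datasets import load_dataset

# Load the English-Xhosa parallel corpus (single "train" split).
dataset = load_dataset("opus_xhosanavy", "en-xh")

# Each example holds a `translation` dict keyed by language code.
pair = dataset["train"][0]["translation"]
print(pair["en"])
print(pair["xh"])
```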
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@spatil6](https://github.com/spatil6) for adding this dataset. |
orange_sum | ---
pretty_name: OrangeSum
annotations_creators:
- found
language_creators:
- found
language:
- fr
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-headline-generation
- news-articles-summarization
paperswithcode_id: orangesum
dataset_info:
- config_name: abstract
features:
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 53531651
num_examples: 21401
- name: test
num_bytes: 3785207
num_examples: 1500
- name: validation
num_bytes: 3698650
num_examples: 1500
download_size: 23058350
dataset_size: 61015508
- config_name: title
features:
- name: text
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 65225136
num_examples: 30659
- name: test
num_bytes: 3176690
num_examples: 1500
- name: validation
num_bytes: 3276713
num_examples: 1500
download_size: 27321627
dataset_size: 71678539
---
# Dataset Card for OrangeSum
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [OrangeSum repository](https://github.com/Tixierae/OrangeSum)
- **Paper:** [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321)
- **Point of Contact:** [Antoine J.-P. Tixier](mailto:Antoine.Tixier-1@colorado.edu)
### Dataset Summary
The OrangeSum dataset was inspired by the XSum dataset. It was created by scraping the "Orange Actu" website: https://actu.orange.fr/. Orange S.A. is a large French multinational telecommunications corporation with 266M customers worldwide. Scraped pages cover almost a decade, from Feb 2011 to Sep 2020. They belong to five main categories: France, world, politics, automotive, and society. The society category is itself divided into 8 subcategories: health, environment, people, culture, media, high-tech, unusual ("insolite" in French), and miscellaneous.
Each article features a single-sentence title as well as a very brief abstract, both professionally written by the author of the article. These two fields were extracted from each page, thus creating two summarization tasks: OrangeSum Title and OrangeSum Abstract.
### Supported Tasks and Leaderboards
**Tasks:** OrangeSum Title and OrangeSum Abstract.
To date, there is no leaderboard for this dataset.
### Languages
The text in the dataset is in French.
## Dataset Structure
### Data Instances
A data instance consists of a news article and a summary. The summary can be a short abstract or a title depending on the configuration.
Example:
**Document:** Le temps sera pluvieux sur huit départements de la France ces prochaines heures : outre les trois départements bretons placés en vigilance orange jeudi matin, cinq autres départements du sud du Massif Central ont été à leur tour placés en alerte orange pluie et inondation. Il s'agit de l'Aveyron, du Cantal, du Gard, de la Lozère, et de la Haute-Loire. Sur l'ensemble de l'épisode, les cumuls de pluies attendus en Bretagne sont compris entre 40 et 60 mm en 24 heures et peuvent atteindre localement les 70 mm en 24 heures.Par la suite, la dégradation qui va se mettre en place cette nuit sur le Languedoc et le sud du Massif Central va donner sur l'Aveyron une première salve intense de pluie. Des cumuls entre 70 et 100 mm voir 120 mm localement sont attendus sur une durée de 24 heures. Sur le relief des Cévennes on attend de 150 à 200 mm, voire 250 mm très ponctuellement sur l'ouest du Gard et l'est de la Lozère. Cet épisode va s'estomper dans la soirée avec le décalage des orages vers les régions plus au nord. Un aspect orageux se mêlera à ces précipitations, avec de la grêle possible, des rafales de vent et une forte activité électrique.
**Abstract:** Outre les trois départements bretons, cinq autres départements du centre de la France ont été placés en vigilance orange pluie-inondation.
**Title:** Pluie-inondations : 8 départements en alerte orange.
### Data Fields
- `text`: the document to be summarized.
- `summary`: the summary of the source document.
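As an illustration, a minimal sketch of loading either configuration with the `datasets` library, using the field names above:

```python
from datasets import load_dataset

# Pick "abstract" or "title" depending on the desired summary type.
dataset = load_dataset("orange_sum", "abstract")

example = dataset["train"][0]
print(example["text"][:200])  # the source article
print(example["summary"])     # the professionally written abstract
```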
### Data Splits
The data is split into training, validation and test sets in both configurations.
| | train | validation | test |
|----------|------:|-----------:|-----:|
| Abstract | 21400 | 1500 | 1500 |
| Title | 30658 | 1500 | 1500 |
## Dataset Creation
### Curation Rationale
The goal was to create a French equivalent of the recently introduced [XSum](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset) dataset. Unlike historical summarization datasets such as CNN, DailyMail, and NY Times, which favor extractive strategies, XSum and OrangeSum require models to display a high degree of abstractivity to perform well. The summaries in OrangeSum are not catchy headlines, but rather capture the gist of the articles.
### Source Data
#### Initial Data Collection and Normalization
Each article features a single-sentence title as well as a very brief abstract. Extracting these two fields from each news article page creates two summarization tasks: OrangeSum Title and OrangeSum Abstract. As a post-processing step, all empty articles and those whose summaries were shorter than 5 words were removed. For OrangeSum Abstract, the top 10% of articles in terms of the proportion of novel unigrams in the abstracts were removed, as such abstracts tend to be introductions rather than real abstracts. This corresponded to a threshold of 57% novel unigrams. For both OrangeSum Title and OrangeSum Abstract, 1500 pairs are set aside for testing and 1500 for validation, and all remaining pairs are used for training.
#### Who are the source language producers?
The authors of the articles.
### Annotations
#### Annotation process
The summaries are professionally written by the authors of the articles.
#### Who are the annotators?
The authors of the articles.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was initially created by Antoine J.-P. Tixier.
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@article{eddine2020barthez,
title={BARThez: a Skilled Pretrained French Sequence-to-Sequence Model},
author={Eddine, Moussa Kamal and Tixier, Antoine J-P and Vazirgiannis, Michalis},
journal={arXiv preprint arXiv:2010.12321},
year={2020}
}
```
### Contributions
Thanks to [@moussaKam](https://github.com/moussaKam) for adding this dataset. |
oscar | ---
pretty_name: OSCAR
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- als
- am
- an
- ar
- arz
- as
- ast
- av
- az
- azb
- ba
- bar
- bcl
- be
- bg
- bh
- bn
- bo
- bpy
- br
- bs
- bxr
- ca
- cbk
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- diq
- dsb
- dv
- el
- eml
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- frr
- fy
- ga
- gd
- gl
- gn
- gom
- gu
- he
- hi
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- ie
- ilo
- io
- is
- it
- ja
- jbo
- jv
- ka
- kk
- km
- kn
- ko
- krc
- ku
- kv
- kw
- ky
- la
- lb
- lez
- li
- lmo
- lo
- lrc
- lt
- lv
- mai
- mg
- mhr
- min
- mk
- ml
- mn
- mr
- mrj
- ms
- mt
- mwl
- my
- myv
- mzn
- nah
- nap
- nds
- ne
- new
- nl
- nn
- 'no'
- oc
- or
- os
- pa
- pam
- pl
- pms
- pnb
- ps
- pt
- qu
- rm
- ro
- ru
- sa
- sah
- scn
- sd
- sh
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- tyv
- ug
- uk
- ur
- uz
- vec
- vi
- vo
- wa
- war
- wuu
- xal
- xmf
- yi
- yo
- yue
- zh
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
- 100M<n<1B
- 10K<n<100K
- 10M<n<100M
- 1K<n<10K
- 1M<n<10M
- n<1K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: oscar
configs:
- unshuffled_deduplicated_af
- unshuffled_deduplicated_als
- unshuffled_deduplicated_am
- unshuffled_deduplicated_an
- unshuffled_deduplicated_ar
- unshuffled_deduplicated_arz
- unshuffled_deduplicated_as
- unshuffled_deduplicated_ast
- unshuffled_deduplicated_av
- unshuffled_deduplicated_az
- unshuffled_deduplicated_azb
- unshuffled_deduplicated_ba
- unshuffled_deduplicated_bar
- unshuffled_deduplicated_bcl
- unshuffled_deduplicated_be
- unshuffled_deduplicated_bg
- unshuffled_deduplicated_bh
- unshuffled_deduplicated_bn
- unshuffled_deduplicated_bo
- unshuffled_deduplicated_bpy
- unshuffled_deduplicated_br
- unshuffled_deduplicated_bs
- unshuffled_deduplicated_bxr
- unshuffled_deduplicated_ca
- unshuffled_deduplicated_cbk
- unshuffled_deduplicated_ce
- unshuffled_deduplicated_ceb
- unshuffled_deduplicated_ckb
- unshuffled_deduplicated_cs
- unshuffled_deduplicated_cv
- unshuffled_deduplicated_cy
- unshuffled_deduplicated_da
- unshuffled_deduplicated_de
- unshuffled_deduplicated_diq
- unshuffled_deduplicated_dsb
- unshuffled_deduplicated_dv
- unshuffled_deduplicated_el
- unshuffled_deduplicated_eml
- unshuffled_deduplicated_en
- unshuffled_deduplicated_eo
- unshuffled_deduplicated_es
- unshuffled_deduplicated_et
- unshuffled_deduplicated_eu
- unshuffled_deduplicated_fa
- unshuffled_deduplicated_fi
- unshuffled_deduplicated_fr
- unshuffled_deduplicated_frr
- unshuffled_deduplicated_fy
- unshuffled_deduplicated_ga
- unshuffled_deduplicated_gd
- unshuffled_deduplicated_gl
- unshuffled_deduplicated_gn
- unshuffled_deduplicated_gom
- unshuffled_deduplicated_gu
- unshuffled_deduplicated_he
- unshuffled_deduplicated_hi
- unshuffled_deduplicated_hr
- unshuffled_deduplicated_hsb
- unshuffled_deduplicated_ht
- unshuffled_deduplicated_hu
- unshuffled_deduplicated_hy
- unshuffled_deduplicated_ia
- unshuffled_deduplicated_id
- unshuffled_deduplicated_ie
- unshuffled_deduplicated_ilo
- unshuffled_deduplicated_io
- unshuffled_deduplicated_is
- unshuffled_deduplicated_it
- unshuffled_deduplicated_ja
- unshuffled_deduplicated_jbo
- unshuffled_deduplicated_jv
- unshuffled_deduplicated_ka
- unshuffled_deduplicated_kk
- unshuffled_deduplicated_km
- unshuffled_deduplicated_kn
- unshuffled_deduplicated_ko
- unshuffled_deduplicated_krc
- unshuffled_deduplicated_ku
- unshuffled_deduplicated_kv
- unshuffled_deduplicated_kw
- unshuffled_deduplicated_ky
- unshuffled_deduplicated_la
- unshuffled_deduplicated_lb
- unshuffled_deduplicated_lez
- unshuffled_deduplicated_li
- unshuffled_deduplicated_lmo
- unshuffled_deduplicated_lo
- unshuffled_deduplicated_lrc
- unshuffled_deduplicated_lt
- unshuffled_deduplicated_lv
- unshuffled_deduplicated_mai
- unshuffled_deduplicated_mg
- unshuffled_deduplicated_mhr
- unshuffled_deduplicated_min
- unshuffled_deduplicated_mk
- unshuffled_deduplicated_ml
- unshuffled_deduplicated_mn
- unshuffled_deduplicated_mr
- unshuffled_deduplicated_mrj
- unshuffled_deduplicated_ms
- unshuffled_deduplicated_mt
- unshuffled_deduplicated_mwl
- unshuffled_deduplicated_my
- unshuffled_deduplicated_myv
- unshuffled_deduplicated_mzn
- unshuffled_deduplicated_nah
- unshuffled_deduplicated_nap
- unshuffled_deduplicated_nds
- unshuffled_deduplicated_ne
- unshuffled_deduplicated_new
- unshuffled_deduplicated_nl
- unshuffled_deduplicated_nn
- unshuffled_deduplicated_no
- unshuffled_deduplicated_oc
- unshuffled_deduplicated_or
- unshuffled_deduplicated_os
- unshuffled_deduplicated_pa
- unshuffled_deduplicated_pam
- unshuffled_deduplicated_pl
- unshuffled_deduplicated_pms
- unshuffled_deduplicated_pnb
- unshuffled_deduplicated_ps
- unshuffled_deduplicated_pt
- unshuffled_deduplicated_qu
- unshuffled_deduplicated_rm
- unshuffled_deduplicated_ro
- unshuffled_deduplicated_ru
- unshuffled_deduplicated_sa
- unshuffled_deduplicated_sah
- unshuffled_deduplicated_scn
- unshuffled_deduplicated_sd
- unshuffled_deduplicated_sh
- unshuffled_deduplicated_si
- unshuffled_deduplicated_sk
- unshuffled_deduplicated_sl
- unshuffled_deduplicated_so
- unshuffled_deduplicated_sq
- unshuffled_deduplicated_sr
- unshuffled_deduplicated_su
- unshuffled_deduplicated_sv
- unshuffled_deduplicated_sw
- unshuffled_deduplicated_ta
- unshuffled_deduplicated_te
- unshuffled_deduplicated_tg
- unshuffled_deduplicated_th
- unshuffled_deduplicated_tk
- unshuffled_deduplicated_tl
- unshuffled_deduplicated_tr
- unshuffled_deduplicated_tt
- unshuffled_deduplicated_tyv
- unshuffled_deduplicated_ug
- unshuffled_deduplicated_uk
- unshuffled_deduplicated_ur
- unshuffled_deduplicated_uz
- unshuffled_deduplicated_vec
- unshuffled_deduplicated_vi
- unshuffled_deduplicated_vo
- unshuffled_deduplicated_wa
- unshuffled_deduplicated_war
- unshuffled_deduplicated_wuu
- unshuffled_deduplicated_xal
- unshuffled_deduplicated_xmf
- unshuffled_deduplicated_yi
- unshuffled_deduplicated_yo
- unshuffled_deduplicated_yue
- unshuffled_deduplicated_zh
- unshuffled_original_af
- unshuffled_original_als
- unshuffled_original_am
- unshuffled_original_an
- unshuffled_original_ar
- unshuffled_original_arz
- unshuffled_original_as
- unshuffled_original_ast
- unshuffled_original_av
- unshuffled_original_az
- unshuffled_original_azb
- unshuffled_original_ba
- unshuffled_original_bar
- unshuffled_original_bcl
- unshuffled_original_be
- unshuffled_original_bg
- unshuffled_original_bh
- unshuffled_original_bn
- unshuffled_original_bo
- unshuffled_original_bpy
- unshuffled_original_br
- unshuffled_original_bs
- unshuffled_original_bxr
- unshuffled_original_ca
- unshuffled_original_cbk
- unshuffled_original_ce
- unshuffled_original_ceb
- unshuffled_original_ckb
- unshuffled_original_cs
- unshuffled_original_cv
- unshuffled_original_cy
- unshuffled_original_da
- unshuffled_original_de
- unshuffled_original_diq
- unshuffled_original_dsb
- unshuffled_original_dv
- unshuffled_original_el
- unshuffled_original_eml
- unshuffled_original_en
- unshuffled_original_eo
- unshuffled_original_es
- unshuffled_original_et
- unshuffled_original_eu
- unshuffled_original_fa
- unshuffled_original_fi
- unshuffled_original_fr
- unshuffled_original_frr
- unshuffled_original_fy
- unshuffled_original_ga
- unshuffled_original_gd
- unshuffled_original_gl
- unshuffled_original_gn
- unshuffled_original_gom
- unshuffled_original_gu
- unshuffled_original_he
- unshuffled_original_hi
- unshuffled_original_hr
- unshuffled_original_hsb
- unshuffled_original_ht
- unshuffled_original_hu
- unshuffled_original_hy
- unshuffled_original_ia
- unshuffled_original_id
- unshuffled_original_ie
- unshuffled_original_ilo
- unshuffled_original_io
- unshuffled_original_is
- unshuffled_original_it
- unshuffled_original_ja
- unshuffled_original_jbo
- unshuffled_original_jv
- unshuffled_original_ka
- unshuffled_original_kk
- unshuffled_original_km
- unshuffled_original_kn
- unshuffled_original_ko
- unshuffled_original_krc
- unshuffled_original_ku
- unshuffled_original_kv
- unshuffled_original_kw
- unshuffled_original_ky
- unshuffled_original_la
- unshuffled_original_lb
- unshuffled_original_lez
- unshuffled_original_li
- unshuffled_original_lmo
- unshuffled_original_lo
- unshuffled_original_lrc
- unshuffled_original_lt
- unshuffled_original_lv
- unshuffled_original_mai
- unshuffled_original_mg
- unshuffled_original_mhr
- unshuffled_original_min
- unshuffled_original_mk
- unshuffled_original_ml
- unshuffled_original_mn
- unshuffled_original_mr
- unshuffled_original_mrj
- unshuffled_original_ms
- unshuffled_original_mt
- unshuffled_original_mwl
- unshuffled_original_my
- unshuffled_original_myv
- unshuffled_original_mzn
- unshuffled_original_nah
- unshuffled_original_nap
- unshuffled_original_nds
- unshuffled_original_ne
- unshuffled_original_new
- unshuffled_original_nl
- unshuffled_original_nn
- unshuffled_original_no
- unshuffled_original_oc
- unshuffled_original_or
- unshuffled_original_os
- unshuffled_original_pa
- unshuffled_original_pam
- unshuffled_original_pl
- unshuffled_original_pms
- unshuffled_original_pnb
- unshuffled_original_ps
- unshuffled_original_pt
- unshuffled_original_qu
- unshuffled_original_rm
- unshuffled_original_ro
- unshuffled_original_ru
- unshuffled_original_sa
- unshuffled_original_sah
- unshuffled_original_scn
- unshuffled_original_sd
- unshuffled_original_sh
- unshuffled_original_si
- unshuffled_original_sk
- unshuffled_original_sl
- unshuffled_original_so
- unshuffled_original_sq
- unshuffled_original_sr
- unshuffled_original_su
- unshuffled_original_sv
- unshuffled_original_sw
- unshuffled_original_ta
- unshuffled_original_te
- unshuffled_original_tg
- unshuffled_original_th
- unshuffled_original_tk
- unshuffled_original_tl
- unshuffled_original_tr
- unshuffled_original_tt
- unshuffled_original_tyv
- unshuffled_original_ug
- unshuffled_original_uk
- unshuffled_original_ur
- unshuffled_original_uz
- unshuffled_original_vec
- unshuffled_original_vi
- unshuffled_original_vo
- unshuffled_original_wa
- unshuffled_original_war
- unshuffled_original_wuu
- unshuffled_original_xal
- unshuffled_original_xmf
- unshuffled_original_yi
- unshuffled_original_yo
- unshuffled_original_yue
- unshuffled_original_zh
dataset_info:
- config_name: unshuffled_deduplicated_af
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 171320914
num_examples: 130640
download_size: 65989254
dataset_size: 171320914
- config_name: unshuffled_deduplicated_als
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2915912
num_examples: 4518
download_size: 1263294
dataset_size: 2915912
- config_name: unshuffled_deduplicated_arz
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 34893248
num_examples: 79928
download_size: 10027493
dataset_size: 34893248
- config_name: unshuffled_deduplicated_an
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 842246
num_examples: 2025
download_size: 133373
dataset_size: 842246
- config_name: unshuffled_deduplicated_ast
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2150022
num_examples: 5343
download_size: 856177
dataset_size: 2150022
- config_name: unshuffled_deduplicated_ba
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 93623739
num_examples: 27050
download_size: 25983491
dataset_size: 93623739
- config_name: unshuffled_deduplicated_am
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 215618603
num_examples: 43102
download_size: 61347279
dataset_size: 215618603
- config_name: unshuffled_deduplicated_as
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 73989818
num_examples: 9212
download_size: 15513004
dataset_size: 73989818
- config_name: unshuffled_deduplicated_azb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 20001183
num_examples: 9985
download_size: 5191704
dataset_size: 20001183
- config_name: unshuffled_deduplicated_be
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1077152244
num_examples: 307405
download_size: 306700943
dataset_size: 1077152244
- config_name: unshuffled_deduplicated_bo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 144506264
num_examples: 15762
download_size: 22365048
dataset_size: 144506264
- config_name: unshuffled_deduplicated_bxr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 11325
num_examples: 36
download_size: 3666
dataset_size: 11325
- config_name: unshuffled_deduplicated_ceb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 24439249
num_examples: 26145
download_size: 7124786
dataset_size: 24439249
- config_name: unshuffled_deduplicated_az
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1526935070
num_examples: 626796
download_size: 521744076
dataset_size: 1526935070
- config_name: unshuffled_deduplicated_bcl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 900
num_examples: 1
download_size: 594
dataset_size: 900
- config_name: unshuffled_deduplicated_cy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 140412555
num_examples: 98225
download_size: 53629697
dataset_size: 140412555
- config_name: unshuffled_deduplicated_dsb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 7589
num_examples: 37
download_size: 3640
dataset_size: 7589
- config_name: unshuffled_deduplicated_bn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6233041155
num_examples: 1114481
download_size: 1257218381
dataset_size: 6233041155
- config_name: unshuffled_deduplicated_bs
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 125977
num_examples: 702
download_size: 38669
dataset_size: 125977
- config_name: unshuffled_deduplicated_ce
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 7021674
num_examples: 2984
download_size: 1862792
dataset_size: 7021674
- config_name: unshuffled_deduplicated_cv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 27359554
num_examples: 10130
download_size: 7461982
dataset_size: 27359554
- config_name: unshuffled_deduplicated_diq
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 161
num_examples: 1
download_size: 331
dataset_size: 161
- config_name: unshuffled_deduplicated_eml
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 24657
num_examples: 80
download_size: 10055
dataset_size: 24657
- config_name: unshuffled_deduplicated_et
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2434152666
num_examples: 1172041
download_size: 966785545
dataset_size: 2434152666
- config_name: unshuffled_deduplicated_bg
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 14420684170
num_examples: 3398679
download_size: 3848659853
dataset_size: 14420684170
- config_name: unshuffled_deduplicated_bpy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1725535
num_examples: 1770
download_size: 191472
dataset_size: 1725535
- config_name: unshuffled_deduplicated_ca
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4544123629
num_examples: 2458067
download_size: 1734548117
dataset_size: 4544123629
- config_name: unshuffled_deduplicated_ckb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 237229156
num_examples: 68210
download_size: 60319928
dataset_size: 237229156
- config_name: unshuffled_deduplicated_ar
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 33468271639
num_examples: 9006977
download_size: 9667185012
dataset_size: 33468271639
- config_name: unshuffled_deduplicated_av
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 334755
num_examples: 360
download_size: 75341
dataset_size: 334755
- config_name: unshuffled_deduplicated_bar
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 551
num_examples: 4
download_size: 354
dataset_size: 551
- config_name: unshuffled_deduplicated_bh
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 35216
num_examples: 82
download_size: 6003
dataset_size: 35216
- config_name: unshuffled_deduplicated_br
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16712284
num_examples: 14724
download_size: 6468062
dataset_size: 16712284
- config_name: unshuffled_deduplicated_cbk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 535
num_examples: 1
download_size: 247
dataset_size: 535
- config_name: unshuffled_deduplicated_da
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 10204168604
num_examples: 4771098
download_size: 3816376656
dataset_size: 10204168604
- config_name: unshuffled_deduplicated_dv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 82122241
num_examples: 17024
download_size: 16836170
dataset_size: 82122241
- config_name: unshuffled_deduplicated_eo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 239597935
num_examples: 84752
download_size: 92858714
dataset_size: 239597935
- config_name: unshuffled_deduplicated_fa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 39986583410
num_examples: 8203495
download_size: 10459318520
dataset_size: 39986583410
- config_name: unshuffled_deduplicated_fy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 26562554
num_examples: 20661
download_size: 10270434
dataset_size: 26562554
- config_name: unshuffled_deduplicated_gn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 24545
num_examples: 68
download_size: 9566
dataset_size: 24545
- config_name: unshuffled_deduplicated_cs
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 25590158564
num_examples: 12308039
download_size: 10494256383
dataset_size: 25590158564
- config_name: unshuffled_deduplicated_hi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 9550345517
num_examples: 1909387
download_size: 2007441283
dataset_size: 9550345517
- config_name: unshuffled_deduplicated_hu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 19027456462
num_examples: 6582908
download_size: 7368098962
dataset_size: 19027456462
- config_name: unshuffled_deduplicated_ie
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1688
num_examples: 11
download_size: 649
dataset_size: 1688
- config_name: unshuffled_deduplicated_fr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 147774253219
num_examples: 59448891
download_size: 55462770729
dataset_size: 147774253219
- config_name: unshuffled_deduplicated_gd
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1339050
num_examples: 3883
download_size: 420601
dataset_size: 1339050
- config_name: unshuffled_deduplicated_gu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 758319353
num_examples: 169834
download_size: 162974870
dataset_size: 758319353
- config_name: unshuffled_deduplicated_hsb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1821734
num_examples: 3084
download_size: 728158
dataset_size: 1821734
- config_name: unshuffled_deduplicated_ia
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 373710
num_examples: 529
download_size: 52722
dataset_size: 373710
- config_name: unshuffled_deduplicated_io
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 139493
num_examples: 617
download_size: 42813
dataset_size: 139493
- config_name: unshuffled_deduplicated_jbo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 700428
num_examples: 617
download_size: 203506
dataset_size: 700428
- config_name: unshuffled_deduplicated_km
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 609886370
num_examples: 108346
download_size: 114480044
dataset_size: 609886370
- config_name: unshuffled_deduplicated_ku
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 62855449
num_examples: 29054
download_size: 23343869
dataset_size: 62855449
- config_name: unshuffled_deduplicated_la
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8867995
num_examples: 18808
download_size: 3421499
dataset_size: 8867995
- config_name: unshuffled_deduplicated_lmo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 458386
num_examples: 1374
download_size: 106048
dataset_size: 458386
- config_name: unshuffled_deduplicated_lv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1895693807
num_examples: 843195
download_size: 710448932
dataset_size: 1895693807
- config_name: unshuffled_deduplicated_min
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 318749
num_examples: 166
download_size: 10233
dataset_size: 318749
- config_name: unshuffled_deduplicated_mr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1487944837
num_examples: 212556
download_size: 299680349
dataset_size: 1487944837
- config_name: unshuffled_deduplicated_mwl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1121
num_examples: 7
download_size: 797
dataset_size: 1121
- config_name: unshuffled_deduplicated_nah
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 11540
num_examples: 58
download_size: 2868
dataset_size: 11540
- config_name: unshuffled_deduplicated_new
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4226557
num_examples: 2126
download_size: 830767
dataset_size: 4226557
- config_name: unshuffled_deduplicated_oc
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3938772
num_examples: 6485
download_size: 1338194
dataset_size: 3938772
- config_name: unshuffled_deduplicated_pam
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 319
num_examples: 1
download_size: 366
dataset_size: 319
- config_name: unshuffled_deduplicated_ps
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 254360032
num_examples: 67921
download_size: 71823163
dataset_size: 254360032
- config_name: unshuffled_deduplicated_it
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 73843292670
num_examples: 28522082
download_size: 27931571784
dataset_size: 73843292670
- config_name: unshuffled_deduplicated_ka
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1982841952
num_examples: 372158
download_size: 377220437
dataset_size: 1982841952
- config_name: unshuffled_deduplicated_ro
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 11601264185
num_examples: 5044757
download_size: 4478423935
dataset_size: 11601264185
- config_name: unshuffled_deduplicated_scn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2990
num_examples: 17
download_size: 1620
dataset_size: 2990
- config_name: unshuffled_deduplicated_ko
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 11956006533
num_examples: 3675420
download_size: 4462788278
dataset_size: 11956006533
- config_name: unshuffled_deduplicated_kw
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 14971
num_examples: 68
download_size: 6195
dataset_size: 14971
- config_name: unshuffled_deduplicated_lez
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3075326
num_examples: 1381
download_size: 763936
dataset_size: 3075326
- config_name: unshuffled_deduplicated_lrc
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 65291
num_examples: 72
download_size: 16272
dataset_size: 65291
- config_name: unshuffled_deduplicated_mg
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13516085
num_examples: 13343
download_size: 4303472
dataset_size: 13516085
- config_name: unshuffled_deduplicated_ml
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2685637627
num_examples: 453904
download_size: 496801596
dataset_size: 2685637627
- config_name: unshuffled_deduplicated_ms
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 45064684
num_examples: 183443
download_size: 16391407
dataset_size: 45064684
- config_name: unshuffled_deduplicated_myv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1224
num_examples: 5
download_size: 705
dataset_size: 1224
- config_name: unshuffled_deduplicated_nds
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13360483
num_examples: 8714
download_size: 5271194
dataset_size: 13360483
- config_name: unshuffled_deduplicated_nn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 57286159
num_examples: 109118
download_size: 23583774
dataset_size: 57286159
- config_name: unshuffled_deduplicated_os
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 10962689
num_examples: 2559
download_size: 2829131
dataset_size: 10962689
- config_name: unshuffled_deduplicated_pms
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1996853
num_examples: 2859
download_size: 716837
dataset_size: 1996853
- config_name: unshuffled_deduplicated_qu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 72587
num_examples: 411
download_size: 17501
dataset_size: 72587
- config_name: unshuffled_deduplicated_sa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 38236039
num_examples: 7121
download_size: 7268337
dataset_size: 38236039
- config_name: unshuffled_deduplicated_sk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4768416160
num_examples: 2820821
download_size: 1960409934
dataset_size: 4768416160
- config_name: unshuffled_deduplicated_sh
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6184582
num_examples: 17610
download_size: 1445894
dataset_size: 6184582
- config_name: unshuffled_deduplicated_so
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16269
num_examples: 42
download_size: 2109
dataset_size: 16269
- config_name: unshuffled_deduplicated_sr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2358255234
num_examples: 645747
download_size: 665025000
dataset_size: 2358255234
- config_name: unshuffled_deduplicated_ta
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5477003981
num_examples: 833101
download_size: 971118176
dataset_size: 5477003981
- config_name: unshuffled_deduplicated_tk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 7092199
num_examples: 4694
download_size: 2219582
dataset_size: 7092199
- config_name: unshuffled_deduplicated_tyv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8319
num_examples: 24
download_size: 2976
dataset_size: 8319
- config_name: unshuffled_deduplicated_uz
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 11834927
num_examples: 15074
download_size: 4300299
dataset_size: 11834927
- config_name: unshuffled_deduplicated_wa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 214337
num_examples: 677
download_size: 79130
dataset_size: 214337
- config_name: unshuffled_deduplicated_xmf
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4617445
num_examples: 2418
download_size: 943151
dataset_size: 4617445
- config_name: unshuffled_deduplicated_sv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 26239415574
num_examples: 11014487
download_size: 10185393483
dataset_size: 26239415574
- config_name: unshuffled_deduplicated_tg
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 261233997
num_examples: 56259
download_size: 62908723
dataset_size: 261233997
- config_name: unshuffled_deduplicated_de
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 155723559907
num_examples: 62398034
download_size: 60797849113
dataset_size: 155723559907
- config_name: unshuffled_deduplicated_tr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 28375018927
num_examples: 11596446
download_size: 10390754678
dataset_size: 28375018927
- config_name: unshuffled_deduplicated_el
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 28689398676
num_examples: 6521169
download_size: 7907952068
dataset_size: 28689398676
- config_name: unshuffled_deduplicated_uk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 29791312367
num_examples: 7782375
download_size: 8037737457
dataset_size: 29791312367
- config_name: unshuffled_deduplicated_vi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 33528331774
num_examples: 9897709
download_size: 10711506712
dataset_size: 33528331774
- config_name: unshuffled_deduplicated_wuu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 33253
num_examples: 64
download_size: 7273
dataset_size: 33253
- config_name: unshuffled_deduplicated_yo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 27169
num_examples: 49
download_size: 8925
dataset_size: 27169
- config_name: unshuffled_original_als
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5297910
num_examples: 7324
download_size: 1489734
dataset_size: 5297910
- config_name: unshuffled_original_arz
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 70132423
num_examples: 158113
download_size: 15891255
dataset_size: 70132423
- config_name: unshuffled_original_az
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2964781192
num_examples: 912330
download_size: 927763846
dataset_size: 2964781192
- config_name: unshuffled_original_bcl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 901
num_examples: 1
download_size: 581
dataset_size: 901
- config_name: unshuffled_original_bn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 10771945233
num_examples: 1675515
download_size: 2139944099
dataset_size: 10771945233
- config_name: unshuffled_original_bs
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 482740
num_examples: 2143
download_size: 56419
dataset_size: 482740
- config_name: unshuffled_original_ce
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8735740
num_examples: 4042
download_size: 2089184
dataset_size: 8735740
- config_name: unshuffled_original_cv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 41047029
num_examples: 20281
download_size: 9400068
dataset_size: 41047029
- config_name: unshuffled_original_diq
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 162
num_examples: 1
download_size: 318
dataset_size: 162
- config_name: unshuffled_original_eml
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 26099
num_examples: 84
download_size: 10071
dataset_size: 26099
- config_name: unshuffled_original_et
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5174800705
num_examples: 2093621
download_size: 1881328631
dataset_size: 5174800705
- config_name: unshuffled_deduplicated_zh
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 267614324325
num_examples: 41708901
download_size: 99982781539
dataset_size: 267614324325
- config_name: unshuffled_original_an
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1329433
num_examples: 2449
download_size: 148184
dataset_size: 1329433
- config_name: unshuffled_original_ast
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2539238
num_examples: 6999
download_size: 920730
dataset_size: 2539238
- config_name: unshuffled_original_ba
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 133704014
num_examples: 42551
download_size: 33215002
dataset_size: 133704014
- config_name: unshuffled_original_bg
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 33753811450
num_examples: 5869686
download_size: 8336964541
dataset_size: 33753811450
- config_name: unshuffled_original_bpy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4347467
num_examples: 6046
download_size: 336974
dataset_size: 4347467
- config_name: unshuffled_original_ca
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8623251470
num_examples: 4390754
download_size: 3101954304
dataset_size: 8623251470
- config_name: unshuffled_original_ckb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 510965919
num_examples: 103639
download_size: 111884006
dataset_size: 510965919
- config_name: unshuffled_deduplicated_es
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 160418075023
num_examples: 56326016
download_size: 60464970319
dataset_size: 160418075023
- config_name: unshuffled_original_da
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16756455589
num_examples: 7664010
download_size: 6000579388
dataset_size: 16756455589
- config_name: unshuffled_original_dv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 131628992
num_examples: 21018
download_size: 24914404
dataset_size: 131628992
- config_name: unshuffled_original_eo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 314188336
num_examples: 121168
download_size: 117076019
dataset_size: 314188336
- config_name: unshuffled_deduplicated_fi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13945067515
num_examples: 5326443
download_size: 5380047103
dataset_size: 13945067515
- config_name: unshuffled_deduplicated_ga
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 63370688
num_examples: 46493
download_size: 22218633
dataset_size: 63370688
- config_name: unshuffled_deduplicated_gom
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1863089
num_examples: 484
download_size: 377051
dataset_size: 1863089
- config_name: unshuffled_deduplicated_hr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 118047678
num_examples: 321484
download_size: 46731365
dataset_size: 118047678
- config_name: unshuffled_deduplicated_hy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1559114836
num_examples: 396093
download_size: 393620208
dataset_size: 1559114836
- config_name: unshuffled_deduplicated_ilo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 667896
num_examples: 1578
download_size: 230065
dataset_size: 667896
- config_name: unshuffled_original_fa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 84209448803
num_examples: 13704702
download_size: 20956409096
dataset_size: 84209448803
- config_name: unshuffled_original_fy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 36238452
num_examples: 33053
download_size: 12409774
dataset_size: 36238452
- config_name: unshuffled_original_gn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 37427
num_examples: 106
download_size: 9761
dataset_size: 37427
- config_name: unshuffled_original_hi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 17929286362
num_examples: 3264660
download_size: 3656636848
dataset_size: 17929286362
- config_name: unshuffled_original_hu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 43074893842
num_examples: 11197780
download_size: 15693847091
dataset_size: 43074893842
- config_name: unshuffled_original_ie
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 25355
num_examples: 101
download_size: 783
dataset_size: 25355
- config_name: unshuffled_deduplicated_ja
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 113315056833
num_examples: 39496439
download_size: 40801218295
dataset_size: 113315056833
- config_name: unshuffled_deduplicated_kk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1583064520
num_examples: 338073
download_size: 389111715
dataset_size: 1583064520
- config_name: unshuffled_deduplicated_krc
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2412731
num_examples: 1377
download_size: 615982
dataset_size: 2412731
- config_name: unshuffled_deduplicated_ky
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 407576051
num_examples: 86561
download_size: 106219565
dataset_size: 407576051
- config_name: unshuffled_deduplicated_li
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 28176
num_examples: 118
download_size: 11724
dataset_size: 28176
- config_name: unshuffled_deduplicated_lt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4185372402
num_examples: 1737411
download_size: 1653025558
dataset_size: 4185372402
- config_name: unshuffled_deduplicated_mhr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6247177
num_examples: 2515
download_size: 1622076
dataset_size: 6247177
- config_name: unshuffled_deduplicated_mn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 880883961
num_examples: 197878
download_size: 219516471
dataset_size: 880883961
- config_name: unshuffled_deduplicated_mt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 17539926
num_examples: 16383
download_size: 5898934
dataset_size: 17539926
- config_name: unshuffled_deduplicated_mzn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 626534
num_examples: 917
download_size: 157541
dataset_size: 626534
- config_name: unshuffled_deduplicated_ne
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1239170286
num_examples: 219334
download_size: 240627361
dataset_size: 1239170286
- config_name: unshuffled_deduplicated_no
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5077919278
num_examples: 3229940
download_size: 1960828800
dataset_size: 5077919278
- config_name: unshuffled_deduplicated_pa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 482461302
num_examples: 87235
download_size: 102390579
dataset_size: 482461302
- config_name: unshuffled_deduplicated_pnb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 9416915
num_examples: 3463
download_size: 2579976
dataset_size: 9416915
- config_name: unshuffled_deduplicated_rm
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6932
num_examples: 34
download_size: 2679
dataset_size: 6932
- config_name: unshuffled_deduplicated_sah
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 27293316
num_examples: 8555
download_size: 7020207
dataset_size: 27293316
- config_name: unshuffled_deduplicated_si
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 841460012
num_examples: 120684
download_size: 175610997
dataset_size: 841460012
- config_name: unshuffled_deduplicated_sq
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1208425681
num_examples: 461598
download_size: 445358539
dataset_size: 1208425681
- config_name: unshuffled_deduplicated_sw
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8747758
num_examples: 24803
download_size: 2946034
dataset_size: 8747758
- config_name: unshuffled_deduplicated_th
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 17082022564
num_examples: 3749826
download_size: 3536468931
dataset_size: 17082022564
- config_name: unshuffled_deduplicated_tt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 320641922
num_examples: 82738
download_size: 85893621
dataset_size: 320641922
- config_name: unshuffled_deduplicated_ur
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1819253063
num_examples: 428674
download_size: 483593818
dataset_size: 1819253063
- config_name: unshuffled_deduplicated_vo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2098461
num_examples: 3317
download_size: 301687
dataset_size: 2098461
- config_name: unshuffled_deduplicated_xal
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 114574
num_examples: 36
download_size: 31863
dataset_size: 114574
- config_name: unshuffled_deduplicated_yue
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2267
num_examples: 7
download_size: 646
dataset_size: 2267
- config_name: unshuffled_original_am
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 378060369
num_examples: 83663
download_size: 102789518
dataset_size: 378060369
- config_name: unshuffled_original_as
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 117733678
num_examples: 14985
download_size: 21437245
dataset_size: 117733678
- config_name: unshuffled_original_azb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 28469069
num_examples: 15446
download_size: 6641415
dataset_size: 28469069
- config_name: unshuffled_original_be
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1877972506
num_examples: 586031
download_size: 498295673
dataset_size: 1877972506
- config_name: unshuffled_original_bo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 195400209
num_examples: 26795
download_size: 28940995
dataset_size: 195400209
- config_name: unshuffled_original_bxr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13376
num_examples: 42
download_size: 3688
dataset_size: 13376
- config_name: unshuffled_original_ceb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 40964537
num_examples: 56248
download_size: 11070392
dataset_size: 40964537
- config_name: unshuffled_original_cy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 224933804
num_examples: 157698
download_size: 81736037
dataset_size: 224933804
- config_name: unshuffled_original_dsb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13761
num_examples: 65
download_size: 3753
dataset_size: 13761
- config_name: unshuffled_original_fr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 303190338653
num_examples: 96742378
download_size: 105324330228
dataset_size: 303190338653
- config_name: unshuffled_original_gd
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2022000
num_examples: 5799
download_size: 525253
dataset_size: 2022000
- config_name: unshuffled_original_gu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1094814909
num_examples: 240691
download_size: 232021129
dataset_size: 1094814909
- config_name: unshuffled_original_hsb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4482886
num_examples: 7959
download_size: 1389826
dataset_size: 4482886
- config_name: unshuffled_original_ia
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 689455
num_examples: 1040
download_size: 83325
dataset_size: 689455
- config_name: unshuffled_original_io
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 158808
num_examples: 694
download_size: 44548
dataset_size: 158808
- config_name: unshuffled_original_jbo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 763027
num_examples: 832
download_size: 212962
dataset_size: 763027
- config_name: unshuffled_original_km
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1102616385
num_examples: 159363
download_size: 193286621
dataset_size: 1102616385
- config_name: unshuffled_original_ku
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 99062676
num_examples: 46535
download_size: 33376537
dataset_size: 99062676
- config_name: unshuffled_original_la
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 27801400
num_examples: 94588
download_size: 5458131
dataset_size: 27801400
- config_name: unshuffled_original_lmo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 470001
num_examples: 1401
download_size: 109759
dataset_size: 470001
- config_name: unshuffled_original_lv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4266812625
num_examples: 1593820
download_size: 1486675302
dataset_size: 4266812625
- config_name: unshuffled_original_min
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 624991
num_examples: 220
download_size: 12379
dataset_size: 624991
- config_name: unshuffled_original_mr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2816455519
num_examples: 326804
download_size: 525303459
dataset_size: 2816455519
- config_name: unshuffled_original_mwl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1273
num_examples: 8
download_size: 789
dataset_size: 1273
- config_name: unshuffled_original_nah
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 12070
num_examples: 61
download_size: 2857
dataset_size: 12070
- config_name: unshuffled_original_new
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5766053
num_examples: 4696
download_size: 1031042
dataset_size: 5766053
- config_name: unshuffled_original_oc
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6127539
num_examples: 10709
download_size: 1574956
dataset_size: 6127539
- config_name: unshuffled_original_pam
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 800
num_examples: 3
download_size: 364
dataset_size: 800
- config_name: unshuffled_original_ps
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 379515973
num_examples: 98216
download_size: 103659691
dataset_size: 379515973
- config_name: unshuffled_original_ro
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 26869251055
num_examples: 9387265
download_size: 9534521905
dataset_size: 26869251055
- config_name: unshuffled_original_scn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3573
num_examples: 21
download_size: 1614
dataset_size: 3573
- config_name: unshuffled_original_sk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 9808179461
num_examples: 5492194
download_size: 3708313186
dataset_size: 9808179461
- config_name: unshuffled_original_sr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4131922671
num_examples: 1013619
download_size: 1081129678
dataset_size: 4131922671
- config_name: unshuffled_original_ta
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 9934879441
num_examples: 1263280
download_size: 1737252172
dataset_size: 9934879441
- config_name: unshuffled_original_tk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 10662991
num_examples: 6456
download_size: 2956150
dataset_size: 10662991
- config_name: unshuffled_original_tyv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 12219
num_examples: 34
download_size: 3034
dataset_size: 12219
- config_name: unshuffled_original_uz
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 21464779
num_examples: 27537
download_size: 5775644
dataset_size: 21464779
- config_name: unshuffled_original_wa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 291400
num_examples: 1001
download_size: 89942
dataset_size: 291400
- config_name: unshuffled_original_xmf
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 6120123
num_examples: 3783
download_size: 1048265
dataset_size: 6120123
- config_name: unshuffled_original_it
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 147378116499
num_examples: 46981781
download_size: 52157691650
dataset_size: 147378116499
- config_name: unshuffled_original_ka
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3768832240
num_examples: 563916
download_size: 680732710
dataset_size: 3768832240
- config_name: unshuffled_original_ko
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 25292102197
num_examples: 7345075
download_size: 8807937093
dataset_size: 25292102197
- config_name: unshuffled_original_kw
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 47016
num_examples: 203
download_size: 6715
dataset_size: 47016
- config_name: unshuffled_original_lez
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3378104
num_examples: 1485
download_size: 825648
dataset_size: 3378104
- config_name: unshuffled_original_lrc
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 78347
num_examples: 88
download_size: 16573
dataset_size: 78347
- config_name: unshuffled_original_mg
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 21789998
num_examples: 17957
download_size: 6213316
dataset_size: 21789998
- config_name: unshuffled_original_ml
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 5244279375
num_examples: 603937
download_size: 938681749
dataset_size: 5244279375
- config_name: unshuffled_original_ms
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 122326270
num_examples: 534016
download_size: 28458804
dataset_size: 122326270
- config_name: unshuffled_original_myv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1436
num_examples: 6
download_size: 691
dataset_size: 1436
- config_name: unshuffled_original_nds
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 18238189
num_examples: 18174
download_size: 6744705
dataset_size: 18238189
- config_name: unshuffled_original_nn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 90838777
num_examples: 185884
download_size: 32863375
dataset_size: 90838777
- config_name: unshuffled_original_os
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 12893477
num_examples: 5213
download_size: 3096133
dataset_size: 12893477
- config_name: unshuffled_original_pms
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2154710
num_examples: 3225
download_size: 756400
dataset_size: 2154710
- config_name: unshuffled_original_qu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 85032
num_examples: 452
download_size: 17931
dataset_size: 85032
- config_name: unshuffled_original_sa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 97055224
num_examples: 14291
download_size: 17517475
dataset_size: 97055224
- config_name: unshuffled_original_sh
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 25841505
num_examples: 36700
download_size: 3457359
dataset_size: 25841505
- config_name: unshuffled_original_so
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 63785
num_examples: 156
download_size: 2478
dataset_size: 63785
- config_name: unshuffled_original_sv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 47000933560
num_examples: 17395625
download_size: 17182697021
dataset_size: 47000933560
- config_name: unshuffled_original_tg
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 397436494
num_examples: 89002
download_size: 90972727
dataset_size: 397436494
- config_name: unshuffled_original_tr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 63581153419
num_examples: 18535253
download_size: 21961561999
dataset_size: 63581153419
- config_name: unshuffled_original_uk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 56439494556
num_examples: 12973467
download_size: 14419203733
dataset_size: 56439494556
- config_name: unshuffled_original_vi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 72226388484
num_examples: 14898250
download_size: 21503594095
dataset_size: 72226388484
- config_name: unshuffled_original_wuu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 114041
num_examples: 214
download_size: 8780
dataset_size: 114041
- config_name: unshuffled_original_yo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 58546
num_examples: 214
download_size: 9550
dataset_size: 58546
- config_name: unshuffled_original_zh
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 545607539477
num_examples: 60137667
download_size: 206003993405
dataset_size: 545607539477
- config_name: unshuffled_deduplicated_en
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1297616499791
num_examples: 304230423
download_size: 496496144465
dataset_size: 1297616499791
- config_name: unshuffled_deduplicated_eu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 360674267
num_examples: 256513
download_size: 134683484
dataset_size: 360674267
- config_name: unshuffled_deduplicated_frr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4500
num_examples: 7
download_size: 540
dataset_size: 4500
- config_name: unshuffled_deduplicated_gl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 404922022
num_examples: 284320
download_size: 155851883
dataset_size: 404922022
- config_name: unshuffled_deduplicated_he
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 10451408409
num_examples: 2375030
download_size: 3043383695
dataset_size: 10451408409
- config_name: unshuffled_deduplicated_ht
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3439
num_examples: 9
download_size: 594
dataset_size: 3439
- config_name: unshuffled_deduplicated_id
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16964948727
num_examples: 9948521
download_size: 5995510660
dataset_size: 16964948727
- config_name: unshuffled_deduplicated_is
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 891047926
num_examples: 389515
download_size: 332871764
dataset_size: 891047926
- config_name: unshuffled_deduplicated_jv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 609713
num_examples: 1163
download_size: 208165
dataset_size: 609713
- config_name: unshuffled_deduplicated_kn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1080985653
num_examples: 251064
download_size: 215526836
dataset_size: 1080985653
- config_name: unshuffled_deduplicated_kv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1200609
num_examples: 924
download_size: 327479
dataset_size: 1200609
- config_name: unshuffled_deduplicated_lb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 21242773
num_examples: 21735
download_size: 8300328
dataset_size: 21242773
- config_name: unshuffled_deduplicated_lo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 119015146
num_examples: 32652
download_size: 23634237
dataset_size: 119015146
- config_name: unshuffled_deduplicated_mai
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 10721
num_examples: 25
download_size: 2267
dataset_size: 10721
- config_name: unshuffled_deduplicated_mk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1186605123
num_examples: 299457
download_size: 303118518
dataset_size: 1186605123
- config_name: unshuffled_deduplicated_mrj
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1096428
num_examples: 669
download_size: 289048
dataset_size: 1096428
- config_name: unshuffled_deduplicated_my
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1112006614
num_examples: 136639
download_size: 207136614
dataset_size: 1112006614
- config_name: unshuffled_deduplicated_nap
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 13782
num_examples: 55
download_size: 4965
dataset_size: 13782
- config_name: unshuffled_deduplicated_nl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 41726089054
num_examples: 20812149
download_size: 15734167112
dataset_size: 41726089054
- config_name: unshuffled_deduplicated_or
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 197401878
num_examples: 44230
download_size: 38726721
dataset_size: 197401878
- config_name: unshuffled_deduplicated_pl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 50387595763
num_examples: 20682611
download_size: 20189161328
dataset_size: 50387595763
- config_name: unshuffled_deduplicated_pt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 68162434231
num_examples: 26920397
download_size: 25997795946
dataset_size: 68162434231
- config_name: unshuffled_deduplicated_ru
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 611031071327
num_examples: 115954598
download_size: 166677136024
dataset_size: 611031071327
- config_name: unshuffled_deduplicated_sd
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 275327037
num_examples: 33925
download_size: 74169753
dataset_size: 275327037
- config_name: unshuffled_deduplicated_sl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1311219223
num_examples: 886223
download_size: 523218283
dataset_size: 1311219223
- config_name: unshuffled_deduplicated_su
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 149921
num_examples: 511
download_size: 53164
dataset_size: 149921
- config_name: unshuffled_deduplicated_te
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1694004428
num_examples: 312644
download_size: 342429224
dataset_size: 1694004428
- config_name: unshuffled_deduplicated_tl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 429427446
num_examples: 294132
download_size: 151342433
dataset_size: 429427446
- config_name: unshuffled_deduplicated_ug
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 86344782
num_examples: 15503
download_size: 20527752
dataset_size: 86344782
- config_name: unshuffled_deduplicated_vec
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 17303
num_examples: 64
download_size: 7647
dataset_size: 17303
- config_name: unshuffled_deduplicated_war
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2338532
num_examples: 9161
download_size: 546586
dataset_size: 2338532
- config_name: unshuffled_deduplicated_yi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 87935052
num_examples: 32919
download_size: 22197718
dataset_size: 87935052
- config_name: unshuffled_original_af
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 254076274
num_examples: 201117
download_size: 85795254
dataset_size: 254076274
- config_name: unshuffled_original_ar
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 87935768938
num_examples: 16365602
download_size: 22232546836
dataset_size: 87935768938
- config_name: unshuffled_original_av
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 423603
num_examples: 456
download_size: 84767
dataset_size: 423603
- config_name: unshuffled_original_bar
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 555
num_examples: 4
download_size: 341
dataset_size: 555
- config_name: unshuffled_original_bh
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 116514
num_examples: 336
download_size: 7615
dataset_size: 116514
- config_name: unshuffled_original_br
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 30203875
num_examples: 37085
download_size: 9178158
dataset_size: 30203875
- config_name: unshuffled_original_cbk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 536
num_examples: 1
download_size: 234
dataset_size: 536
- config_name: unshuffled_original_cs
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 57080142860
num_examples: 21001388
download_size: 21716697253
dataset_size: 57080142860
- config_name: unshuffled_original_de
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 331224484023
num_examples: 104913504
download_size: 119506267566
dataset_size: 331224484023
- config_name: unshuffled_original_el
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 66273231642
num_examples: 10425596
download_size: 17309601342
dataset_size: 66273231642
- config_name: unshuffled_original_es
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 298492270636
num_examples: 88199221
download_size: 106039137656
dataset_size: 298492270636
- config_name: unshuffled_original_fi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 28571419204
num_examples: 8557453
download_size: 9970837279
dataset_size: 28571419204
- config_name: unshuffled_original_ga
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 92369035
num_examples: 83223
download_size: 29262282
dataset_size: 92369035
- config_name: unshuffled_original_gom
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2257169
num_examples: 640
download_size: 442950
dataset_size: 2257169
- config_name: unshuffled_original_hr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 243829069
num_examples: 582219
download_size: 79417804
dataset_size: 243829069
- config_name: unshuffled_original_hy
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3939672772
num_examples: 659430
download_size: 897364024
dataset_size: 3939672772
- config_name: unshuffled_original_ilo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 925809
num_examples: 2638
download_size: 267451
dataset_size: 925809
- config_name: unshuffled_original_ja
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 232216718556
num_examples: 62721527
download_size: 79564645083
dataset_size: 232216718556
- config_name: unshuffled_original_kk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2833778199
num_examples: 524591
download_size: 615067761
dataset_size: 2833778199
- config_name: unshuffled_original_krc
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2688672
num_examples: 1581
download_size: 656496
dataset_size: 2688672
- config_name: unshuffled_original_ky
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 630794622
num_examples: 146993
download_size: 152636608
dataset_size: 630794622
- config_name: unshuffled_original_li
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 31312
num_examples: 137
download_size: 11793
dataset_size: 31312
- config_name: unshuffled_original_lt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 9445278312
num_examples: 2977757
download_size: 3439789726
dataset_size: 9445278312
- config_name: unshuffled_original_mhr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 7553453
num_examples: 3212
download_size: 1834912
dataset_size: 7553453
- config_name: unshuffled_original_mn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2332897881
num_examples: 395605
download_size: 472357548
dataset_size: 2332897881
- config_name: unshuffled_original_mt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 24470330
num_examples: 26598
download_size: 7533204
dataset_size: 24470330
- config_name: unshuffled_original_mzn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 720229
num_examples: 1055
download_size: 177817
dataset_size: 720229
- config_name: unshuffled_original_ne
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1866852959
num_examples: 299938
download_size: 355291639
dataset_size: 1866852959
- config_name: unshuffled_original_no
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8652054976
num_examples: 5546211
download_size: 3106155643
dataset_size: 8652054976
- config_name: unshuffled_original_pa
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 801167879
num_examples: 127467
download_size: 164207256
dataset_size: 801167879
- config_name: unshuffled_original_pnb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 12039418
num_examples: 4599
download_size: 3215579
dataset_size: 12039418
- config_name: unshuffled_original_rm
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 8027
num_examples: 41
download_size: 2691
dataset_size: 8027
- config_name: unshuffled_original_sah
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 43817239
num_examples: 22301
download_size: 9079982
dataset_size: 43817239
- config_name: unshuffled_original_si
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1469374795
num_examples: 203082
download_size: 310935021
dataset_size: 1469374795
- config_name: unshuffled_original_sq
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2440834375
num_examples: 672077
download_size: 861831806
dataset_size: 2440834375
- config_name: unshuffled_original_sw
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 14073775
num_examples: 41986
download_size: 3712739
dataset_size: 14073775
- config_name: unshuffled_original_th
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 38289228753
num_examples: 6064129
download_size: 7377469078
dataset_size: 38289228753
- config_name: unshuffled_original_tt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 703412782
num_examples: 135923
download_size: 151056507
dataset_size: 703412782
- config_name: unshuffled_original_ur
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2802270961
num_examples: 638596
download_size: 712607161
dataset_size: 2802270961
- config_name: unshuffled_original_vo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2118909
num_examples: 3366
download_size: 307184
dataset_size: 2118909
- config_name: unshuffled_original_xal
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 116043
num_examples: 39
download_size: 32117
dataset_size: 116043
- config_name: unshuffled_original_yue
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 3899
num_examples: 11
download_size: 647
dataset_size: 3899
- config_name: unshuffled_original_en
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2525437912097
num_examples: 455994980
download_size: 903830686146
dataset_size: 2525437912097
- config_name: unshuffled_original_eu
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 894836188
num_examples: 506883
download_size: 248190119
dataset_size: 894836188
- config_name: unshuffled_original_frr
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4507
num_examples: 7
download_size: 527
dataset_size: 4507
- config_name: unshuffled_original_gl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 656477422
num_examples: 544388
download_size: 235384299
dataset_size: 656477422
- config_name: unshuffled_original_he
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 21113706929
num_examples: 3808397
download_size: 5660026441
dataset_size: 21113706929
- config_name: unshuffled_original_ht
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4083
num_examples: 13
download_size: 590
dataset_size: 4083
- config_name: unshuffled_original_id
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 32317679452
num_examples: 16236463
download_size: 10596988488
dataset_size: 32317679452
- config_name: unshuffled_original_is
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1524936467
num_examples: 625673
download_size: 533034495
dataset_size: 1524936467
- config_name: unshuffled_original_jv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 691812
num_examples: 1445
download_size: 219246
dataset_size: 691812
- config_name: unshuffled_original_kn
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1763625096
num_examples: 350363
download_size: 342155433
dataset_size: 1763625096
- config_name: unshuffled_original_kv
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2379758
num_examples: 1549
download_size: 400725
dataset_size: 2379758
- config_name: unshuffled_original_lb
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 30595156
num_examples: 34807
download_size: 10725552
dataset_size: 30595156
- config_name: unshuffled_original_lo
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 182361509
num_examples: 52910
download_size: 33916738
dataset_size: 182361509
- config_name: unshuffled_original_mai
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 325990
num_examples: 123
download_size: 5563
dataset_size: 325990
- config_name: unshuffled_original_mk
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2202480390
num_examples: 437871
download_size: 508239918
dataset_size: 2202480390
- config_name: unshuffled_original_mrj
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1165977
num_examples: 757
download_size: 303447
dataset_size: 1165977
- config_name: unshuffled_original_my
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2021872493
num_examples: 232329
download_size: 369850157
dataset_size: 2021872493
- config_name: unshuffled_original_nap
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 17839
num_examples: 73
download_size: 5023
dataset_size: 17839
- config_name: unshuffled_original_nl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 83230965323
num_examples: 34682142
download_size: 29352811750
dataset_size: 83230965323
- config_name: unshuffled_original_or
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 260151226
num_examples: 59463
download_size: 49834443
dataset_size: 260151226
- config_name: unshuffled_original_pl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 117121370605
num_examples: 35440972
download_size: 42884898947
dataset_size: 117121370605
- config_name: unshuffled_original_pt
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 132635490139
num_examples: 42114520
download_size: 47257949300
dataset_size: 132635490139
- config_name: unshuffled_original_ru
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1241627166551
num_examples: 161836003
download_size: 319755378587
dataset_size: 1241627166551
- config_name: unshuffled_original_sd
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 364256869
num_examples: 44280
download_size: 90621520
dataset_size: 364256869
- config_name: unshuffled_original_sl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2675665926
num_examples: 1746604
download_size: 956197026
dataset_size: 2675665926
- config_name: unshuffled_original_su
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 225627
num_examples: 805
download_size: 59643
dataset_size: 225627
- config_name: unshuffled_original_te
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2611548765
num_examples: 475703
download_size: 522470115
dataset_size: 2611548765
- config_name: unshuffled_original_tl
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 606295665
num_examples: 458206
download_size: 204895159
dataset_size: 606295665
- config_name: unshuffled_original_ug
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 127419368
num_examples: 22255
download_size: 27923925
dataset_size: 127419368
- config_name: unshuffled_original_vec
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 19182
num_examples: 73
download_size: 7672
dataset_size: 19182
- config_name: unshuffled_original_war
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2682430
num_examples: 9760
download_size: 644576
dataset_size: 2682430
- config_name: unshuffled_original_yi
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 147601654
num_examples: 59364
download_size: 33337157
dataset_size: 147601654
---
# Dataset Card for "oscar"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://oscar-corpus.com](https://oscar-corpus.com)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OSCAR or **O**pen **S**uper-large **C**rawled [**A**LMAnaCH](https://team.inria.fr/almanach/) co**R**pus is a huge multilingual corpus obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) corpus using the [goclassy](https://github.com/pjox/goclassy) architecture. Data is distributed by language in both original and deduplicated form.
The version here is the original OSCAR 2019 release: https://oscar-project.org/post/oscar-2019/
For more recent versions, visit the [oscar-corpus](https://huggingface.co/oscar-corpus) organization on the Hub:
- OSCAR 22.01 (released in January 2022): [oscar-corpus/OSCAR-2201](https://huggingface.co/datasets/oscar-corpus/OSCAR-2201)
- OSCAR 21.09 (released in September 2021): [oscar-corpus/OSCAR-2109](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)
### Supported Tasks and Leaderboards
OSCAR is mainly intended for pretraining language models and word representations.
### Languages
All the data is distributed by language; both the original and the deduplicated versions are available. 166 different languages are covered. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space-separated tokens), lines, and sizes for both the original and the deduplicated versions of OSCAR.
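As an illustration, a single subcorpus can be loaded through the Hugging Face `datasets` library by passing the corresponding config name. The following is a minimal sketch, assuming the `datasets` package is installed; the Afrikaans config is chosen arbitrarily:
```python
from datasets import load_dataset

# Any config name listed on this card works the same way, e.g.
# "unshuffled_original_af" for the non-deduplicated variant.
dataset = load_dataset("oscar", "unshuffled_deduplicated_af", split="train")

# Each record holds an integer `id` and the raw `text` of one document.
print(dataset[0]["id"], dataset[0]["text"][:80])
```
For the very large configs (the English splits run to hundreds of gigabytes), passing `streaming=True` to `load_dataset` on recent `datasets` releases iterates over the data without materialising it on disk first.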
## Dataset Structure
We show detailed information for all the configurations of the dataset.
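The full list of configuration names can also be retrieved programmatically rather than read off this card; a short sketch, again assuming the `datasets` library:
```python
from datasets import get_dataset_config_names

# Two variants per language: "unshuffled_original_*" and
# "unshuffled_deduplicated_*".
configs = get_dataset_config_names("oscar")
print(len(configs), configs[:4])
```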
### Data Instances
<details>
<summary>Click to expand the Data/size information for each language (deduplicated)</summary>
#### unshuffled_deduplicated_af
- **Size of downloaded dataset files:** 65.99 MB
- **Size of the generated dataset:** 172.30 MB
- **Total amount of disk used:** 238.29 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "aanlyn markte as gevolg van ons voortgesette 'n begrip opsie handel sakeplan pdf terwyl ons steeds die gereelde ons binêre opsies handel"
}
```
#### unshuffled_deduplicated_als
- **Size of downloaded dataset files:** 1.26 MB
- **Size of the generated dataset:** 2.96 MB
- **Total amount of disk used:** 4.22 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"De Nazionalpark hät e Flächi vo 170,3 km² und isch dodemit s grösti Naturschutzgebiet vo de Schwiz. Er ligt uf em Gebiet vo de ..."
}
```
#### unshuffled_deduplicated_am
- **Size of downloaded dataset files:** 61.35 MB
- **Size of the generated dataset:** 216.15 MB
- **Total amount of disk used:** 277.50 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"አየር መንገዱ ከአዲስ አበባ ወደ ሮም ጣሊያን በማምራት ላይ በነበረበት ጊዜ ረዳት አብራሪው የጉዞውን አቅጣጫ በመቀየር ጄኔቭ አውሮፓላን ማረፊያ በማሳረፍ እጁን ለፖሊስ ሰጥቷል።\\nየኢትዮጵያ መንግስት የ..."
}
```
#### unshuffled_deduplicated_an
- **Size of downloaded dataset files:** 0.14 MB
- **Size of the generated dataset:** 0.85 MB
- **Total amount of disk used:** 0.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"واااااااأسفاه الأمم تفتخر ب 0 أمي ووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووو..."
}
```
#### unshuffled_deduplicated_ar
- **Size of downloaded dataset files:** 9.67 GB
- **Size of the generated dataset:** 33.57 GB
- **Total amount of disk used:** 43.23 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"مرحبا بك عزيز الزائر نتمنى لك أوقاتاً سعيدة معنا وأن نزداد شرفا بخدمتك ولا تنسى التسجيل معنا لتستفيد بكل جديد\\nأهلا وسهلا بك زا..."
}
```
#### unshuffled_deduplicated_arz
- **Size of downloaded dataset files:** 10.02 MB
- **Size of the generated dataset:** 35.91 MB
- **Total amount of disk used:** 45.94 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"بنى عجل : قبيلة من عجل بن لجيم بن صعب بن على بن بكر بن وائل انتقل اغلبهم الى البصرة فى العراق و اصفهان و خراسان فى ايران و اذرب..."
}
```
#### unshuffled_deduplicated_as
- **Size of downloaded dataset files:** 15.51 MB
- **Size of the generated dataset:** 74.07 MB
- **Total amount of disk used:** 89.58 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"আমি, এই সংগঠনৰ সদস্য সকলে একেলগ হৈ অসমকে ধৰি ভাৰতৰ উত্তৰ পূৰ্বাঞ্চলৰ অমূল্য কলা-সাংস্কৃতিক সম্পদৰাজি বৃহত্তৰ অষ্ট্ৰেলিয়াৰ সন্মু..."
}
```
#### unshuffled_deduplicated_ast
- **Size of downloaded dataset files:** 0.86 MB
- **Size of the generated dataset:** 2.17 MB
- **Total amount of disk used:** 3.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"The Killers llanzaron el so álbum debú, Hot Fuss, en xunu de 2004 nel Reinu Xuníu, al traviés de la discográfica Lizard King, y..."
}
```
#### unshuffled_deduplicated_av
- **Size of downloaded dataset files:** 0.07 MB
- **Size of the generated dataset:** 0.34 MB
- **Total amount of disk used:** 0.41 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Жинда малъараб ва божизе бегьулеб рагІудаса кьуризе бегьуларо гьев. Гьес насихІат гьабизе кколелъул бацІцІадаб диналъул рахъалъ..."
}
```
#### unshuffled_deduplicated_az
- **Size of downloaded dataset files:** 521.74 MB
- **Size of the generated dataset:** 1.53 GB
- **Total amount of disk used:** 2.05 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"AZTV-Artıq 7 ildir ki, Abşeron rayonu dotasiya almadan bütün xərclərini yerli daxilolmalar hesabına maliyyələşdirir.\\nDünən, 10..."
}
```
#### unshuffled_deduplicated_azb
- **Size of downloaded dataset files:** 5.19 MB
- **Size of the generated dataset:** 20.08 MB
- **Total amount of disk used:** 25.27 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"لعلی ١٣-جو عصرده یاشاییب یاراتمیش گؤرکملی آذربایجان شاعرلریندندیر. ١٢٢٤-جی ایلده تبریزده آنادان اولموشدور، گنج یاشلاریندا تیجار..."
}
```
#### unshuffled_deduplicated_ba
- **Size of downloaded dataset files:** 25.98 MB
- **Size of the generated dataset:** 93.84 MB
- **Total amount of disk used:** 119.82 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Күҙәтеү ҡуласаһы моделен хәҙер Мифтахетдин Аҡмулла исемендәге Башҡорт дәүләт педагогия университетында ла эшләргә мөмкин\\t\\nКүҙ..."
}
```
#### unshuffled_deduplicated_bar
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": " vo"
}
```
#### unshuffled_deduplicated_bcl
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"& ÿ ó / í 0 - ø û ù ö ú ð ï ú \\u0014 ù þ ô ö í ÷ ò \\u0014 ÷ í ù û ö í \\u0001 û ñ ç þ \\u0001 ð \\u0007 þ ò ñ ñ ò ô \\u0017 û ö ô ÷..."
}
```
#### unshuffled_deduplicated_be
- **Size of downloaded dataset files:** 306.70 MB
- **Size of the generated dataset:** 1.08 GB
- **Total amount of disk used:** 1.39 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Брэсцкія ўлады не дазволілі прафсаюзу РЭП правесці пікетаванне ў парку Воінаў-інтэрнацыяналістаў 30 мая 2018 года.\\nСітуацыю пр..."
}
```
#### unshuffled_deduplicated_bg
- **Size of downloaded dataset files:** 3.85 GB
- **Size of the generated dataset:** 14.45 GB
- **Total amount of disk used:** 18.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ЖАЛБОПОДАТЕЛЯТ директор на Дирекция „ Обжалване и данъчно-осигурителна практика“- Бургас, редовно призован, се представлява от ..."
}
```
#### unshuffled_deduplicated_bh
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.04 MB
- **Total amount of disk used:** 0.04 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"सुकमा जिला भारत के छत्तीसगढ़ राज्य में एगो जिला बाटे। एकर मुख्यालय सुकमा शहर बाटे। एकर कुल रकबा 5636 वर्ग कि॰मी॰ बाटे।\"..."
}
```
#### unshuffled_deduplicated_bn
- **Size of downloaded dataset files:** 1.26 GB
- **Size of the generated dataset:** 6.24 GB
- **Total amount of disk used:** 7.50 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ভড়ং সর্বস্ব বাংলা আর্ট অ্যান্ড কালচারের হিসাব গুলিয়ে দেওয়ার ম্যাজিকের নাম ব্রাত্য রাইসু November 23, 2017\\nTagged with ডায়োজিনি..."
}
```
#### unshuffled_deduplicated_bo
- **Size of downloaded dataset files:** 22.37 MB
- **Size of the generated dataset:** 144.65 MB
- **Total amount of disk used:** 167.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"བོད་མི་འདི་དག་ནི་རང་རྒྱུད་སྒོ་རུ་ཕུད་དེ་གཞན་རྒྱུད་པང་དུ་ཉར་ནས་གསོ་སྐྱོང་བྱེད་དགོས་ཟེར་བ་དང་གཅིག་མཚུངས་རེད།\\nཚན་རིག་ནི་དང་ཐོག་རང..."
}
```
#### unshuffled_deduplicated_bpy
- **Size of downloaded dataset files:** 0.19 MB
- **Size of the generated dataset:** 1.78 MB
- **Total amount of disk used:** 1.97 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"পৌরসভা এহার আয়তন (লয়াহান) ২,৭৩০,.৬৩ বর্গ কিলোমিটার। পৌরসভা এহার মাপাহানর অক্ষাংশ বারো দ্রাঘিমাংশ ইলতাই 18.63° S 48.18° W ।[১]..."
}
```
#### unshuffled_deduplicated_br
- **Size of downloaded dataset files:** 6.47 MB
- **Size of the generated dataset:** 17.00 MB
- **Total amount of disk used:** 23.47 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Ar mank Magalhães(Daveoù a vank) a zo ur spesad evned, Spheniscus magellanicus an anv skiantel anezhañ.\\nGallout a reer implijo..."
}
```
#### unshuffled_deduplicated_bs
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.15 MB
- **Total amount of disk used:** 0.18 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ž šř é ú šř šř ě šř ž é č ě ž ů ě ď éé ýš ě ě Ž č š ý ě ď é ýš ě ď ě éé ýš ě č ž ě š ý ď ě ýš é ú č ž č š ý ď ý ž é éě ď é č ýš..."
}
```
#### unshuffled_deduplicated_bxr
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"2002 оной хабар буряад хэлэ бэшэгэй һалбари Үндэһэтэнэй хүмүүнлиг ухаанай дээдэ һургуули болгогдожо өөршэлэгдөө.\\nХарин мүнөө б..."
}
```
#### unshuffled_deduplicated_ca
- **Size of downloaded dataset files:** 1.73 GB
- **Size of the generated dataset:** 4.57 GB
- **Total amount of disk used:** 6.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Daniel Vendrell, conegut com Vandrell, ha sigut un dels il•lustradors contemporanis més influents, representant a la nova onada..."
}
```
#### unshuffled_deduplicated_cbk
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano..."
}
```
#### unshuffled_deduplicated_ce
- **Size of downloaded dataset files:** 1.87 MB
- **Size of the generated dataset:** 7.04 MB
- **Total amount of disk used:** 8.90 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Шаьш анархисташ ду бохучу жигархойн дIахьедарехь дуьйцу, оьрсийн ницкъаллийн структурийн а, федералан каналан а Iалашонаш \\\"мар..."
}
```
#### unshuffled_deduplicated_ceb
- **Size of downloaded dataset files:** 7.12 MB
- **Size of the generated dataset:** 24.83 MB
- **Total amount of disk used:** 31.95 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Si Isko walay pupamilok nga nagtan-aw sa unahan, natugaw. “Naunsa ka gud diha Isko nga layo man kaayo ang imong panan-aw?” ni I..."
}
```
#### unshuffled_deduplicated_ckb
- **Size of downloaded dataset files:** 60.32 MB
- **Size of the generated dataset:** 237.72 MB
- **Total amount of disk used:** 298.05 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"رسی رۆژ - ساڵێک دوای بومەلەرزەی کرماشان میوانی بەرنامە : کاک سیاوەش حەیاتی چالاکی مەدەنی -قەسری شیرین\\nپارچە موزیک 30 / 10 / 20..."
}
```
#### unshuffled_deduplicated_cs
- **Size of downloaded dataset files:** 10.49 GB
- **Size of the generated dataset:** 25.71 GB
- **Total amount of disk used:** 36.20 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Akce anarchistů proti připravovanému novému služební řádu a nízkým mzdám 1903 – Historie českého anarchismu (1880 – 1939)\\nRost..."
}
```
#### unshuffled_deduplicated_cv
- **Size of downloaded dataset files:** 7.47 MB
- **Size of the generated dataset:** 27.49 MB
- **Total amount of disk used:** 34.95 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Шыранӑ чухне ӑнсӑртран латин кирилл саспаллисем вырӑнне латин саспаллисене ҫырсан, сайт эсир ҫырнине юсама тӑрӑшӗ.\\nКу сайтра ч..."
}
```
#### unshuffled_deduplicated_cy
- **Size of downloaded dataset files:** 53.63 MB
- **Size of the generated dataset:** 141.22 MB
- **Total amount of disk used:** 194.86 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Mae capeli Cymreig yr Andes ym Mhatagonia wedi cyhoeddi na fydd gwasanaethau yno weddill y mis, oherwydd yr eira trwm sydd wedi..."
}
```
#### unshuffled_deduplicated_da
- **Size of downloaded dataset files:** 3.82 GB
- **Size of the generated dataset:** 10.24 GB
- **Total amount of disk used:** 14.06 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Den 2.-5. februar 2016 løb det tredje kursus i uddannelsen af 4kommunesamarbejdets Local Impact Coaches, af stablen i Gentofte ..."
}
```
#### unshuffled_deduplicated_de
- **Size of downloaded dataset files:** 60.80 GB
- **Size of the generated dataset:** 156.30 GB
- **Total amount of disk used:** 217.10 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Auf dieser Seite gibt es mind. ein YouTube Video. Cookies für diese Website wurden abgelehnt. Dadurch können keine YouTube Vide..."
}
```
#### unshuffled_deduplicated_diq
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Zıwanê Slawki, zıwano merdumanê Slawano. Zıwanê Slawki yew lızgeyê Zıwananê Hind u Ewropao. Keyeyê Zıwananê Slawki beno hirê letey:"
}
```
#### unshuffled_deduplicated_dsb
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Pśiklaskaju južo pśed pśedstajenim... 1500 źiśi njamóžo wěcej docakaś, měsćańska hala w Chóśebuzu - wupśedana."
}
```
#### unshuffled_deduplicated_dv
- **Size of downloaded dataset files:** 16.84 MB
- **Size of the generated dataset:** 82.19 MB
- **Total amount of disk used:** 99.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ބ. އަތޮޅުގައި ހުޅުވަން ތައްޔާރުވަމުން އަންނަ ވައްކަރު ރިސޯޓުގައި ވަޒީފާ އަދާކުރަން ޝައުގުވެރިވާ ފަރާތްތަކަށް ކުރިމަތިލުމުގެ ފުރ..."
}
```
#### unshuffled_deduplicated_el
- **Size of downloaded dataset files:** 7.91 GB
- **Size of the generated dataset:** 28.74 GB
- **Total amount of disk used:** 36.65 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Νεκρός εντοπίστηκε μέσα στο σπίτι του στην οδό Ηρώδου Αττικού στον αριθμό 7 ο επικεφαλής του προξενικού τμήματος της Ρωσικής πρ..."
}
```
#### unshuffled_deduplicated_eml
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"A séguit dal prucès ad rubutiśasiòṅ di abitànt dal pòpul ad Mikenes, Angoras 'l è finî dènt'r a 'n robot cun la tèsta dna rana ..."
}
```
#### unshuffled_deduplicated_en
- **Size of downloaded dataset files:** 496.50 GB
- **Size of the generated dataset:** 1299.75 GB
- **Total amount of disk used:** 1796.24 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Mtendere Village was inspired by the vision of Chief Napoleon Dzombe, which he shared with John Blanchard during his first visi..."
}
```
#### unshuffled_deduplicated_eo
- **Size of downloaded dataset files:** 92.86 MB
- **Size of the generated dataset:** 240.12 MB
- **Total amount of disk used:** 332.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Ĉu ... preĝi | mediti | ricevi instigojn || kanti | muziki || informiĝi | legi | studi || prepari Diservon\\nTemas pri kolekto d..."
}
```
#### unshuffled_deduplicated_es
- **Size of downloaded dataset files:** 60.46 GB
- **Size of the generated dataset:** 160.86 GB
- **Total amount of disk used:** 221.32 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Como se librará de la celulitis en el gimnasio La piel superflua en las manos después del adelgazamiento, Los bailes fáciles pa..."
}
```
#### unshuffled_deduplicated_et
- **Size of downloaded dataset files:** 966.79 MB
- **Size of the generated dataset:** 2.45 GB
- **Total amount of disk used:** 3.41 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"MTÜ AB Video järgib oma tegevuses kodanikuühenduste eetilise tegevuse üldtunnustatud põhimõtteid, mis on lühidalt kokkuvõetud 7..."
}
```
#### unshuffled_deduplicated_eu
- **Size of downloaded dataset files:** 134.68 MB
- **Size of the generated dataset:** 363.93 MB
- **Total amount of disk used:** 498.61 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Gure jarduerek eraikuntzarekin, elkarbizitzarekin, hirigintzarekin eta ekologiarekin dute harremana, baita ideia eta konponbideak irudikatu eta garatzearekin ere, eraikuntza sektorea hobetuz, pertsonen erosotasuna eta bizi-kalitatea hobetzeko."
}
```
#### unshuffled_deduplicated_fa
- **Size of downloaded dataset files:** 10.46 GB
- **Size of the generated dataset:** 40.06 GB
- **Total amount of disk used:** 50.52 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"قـــــــــــــــــرار بود با هم کنـــــــــــــار بیایم نه اینکه از کنــــــــــــار هم رد بشیم...!!!\\nاگر روزی دلت لبریز غم بو..."
}
```
#### unshuffled_deduplicated_fi
- **Size of downloaded dataset files:** 5.38 GB
- **Size of the generated dataset:** 13.99 GB
- **Total amount of disk used:** 19.37 GB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Kiitos Deelle kaikesta - 1,5 viikkoa kulunut, kun Dee ei ole enää ollut omani. Reilu viikko sitten sunnuntaina vein Deen uuteen kotiinsa. Itselläni on ollut niin ristiriitaiset t..."
}
```
#### unshuffled_deduplicated_fr
- **Size of downloaded dataset files:** 55.46 GB
- **Size of the generated dataset:** 148.28 GB
- **Total amount of disk used:** 203.75 GB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Média de débat d'idées, de culture et de littérature. Récits, décryptages, analyses, portraits et critiques autour de la vie des idées. Magazine engagé, ouvert aux autres et au monde.. Bring up to date in french"
}
```
#### unshuffled_deduplicated_frr
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Hiragana’ Practice’Sheet’1’(A -O)’ ’ Name:’________ __________________________’Section:’_______________ _’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ..."
}
```
#### unshuffled_deduplicated_fy
- **Size of downloaded dataset files:** 10.27 MB
- **Size of the generated dataset:** 26.73 MB
- **Total amount of disk used:** 37.00 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Nim in sêfte ride op Holmsjön, yn ien fan 'e lytse marren yn de omkriten, of nim se op avontueren lykas nonresidential. lâns Indalsälven wetter. Holm Sportklubb hawwe kano 's te huur, yn gearwurking mei de Baltyske Power konferinsje."
}
```
#### unshuffled_deduplicated_ga
- **Size of downloaded dataset files:** 22.22 MB
- **Size of the generated dataset:** 63.86 MB
- **Total amount of disk used:** 86.08 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Is fóram é seo chun plé a dhéanamh ar an leabhar atá roghnaithe do mhí na Samhna 2013 amháin. Ní féidir ach le baill chláraithe..."
}
```
#### unshuffled_deduplicated_gd
- **Size of downloaded dataset files:** 0.42 MB
- **Size of the generated dataset:** 1.36 MB
- **Total amount of disk used:** 1.78 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Zhou Yujun, a 'phàrtaidh Rùnaire Comataidh Sgìre Yanfeng ann Hengyang bhaile agus a Sgìre pàrtaidh agus an riaghaltas a' bhuidheann-riochdachaidh a 'tighinn a chèilidh air ar companaidh air Apr. 14, 2017."
}
```
#### unshuffled_deduplicated_gl
- **Size of downloaded dataset files:** 155.85 MB
- **Size of the generated dataset:** 408.34 MB
- **Total amount of disk used:** 564.19 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"O persoal de Inditex da provincia de Pontevedra segue a reclamar iguais condicións laborais no conxunto do país - CIG: Confeder..."
}
```
#### unshuffled_deduplicated_gn
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"º ÑÆÚÓ À Ã Ð É Æ ¾ ÄÂ Î À ¼ Æ É ÄÛ = Ü Ý\\\"Þ ßà á â ã ä å æçè ã é ê â å àë ì æê íî é á ë ï í çì àð í Ü à ñ ê é ò ä ì\"..."
}
```
#### unshuffled_deduplicated_gom
- **Size of downloaded dataset files:** 0.38 MB
- **Size of the generated dataset:** 1.87 MB
- **Total amount of disk used:** 2.24 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"दुष्ट शीळ हें कौरवांचें । रामें सविस्तर देखूनि साचें । बोलिले वचनें जें दुर्वाचे । करी तयांचें अनुस्मरण ॥२२०॥\"..."
}
```
#### unshuffled_deduplicated_gu
- **Size of downloaded dataset files:** 162.97 MB
- **Size of the generated dataset:** 759.34 MB
- **Total amount of disk used:** 922.32 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"અધિક માસ ચાલે છે. સમગ્ર ભારતમાં અને તેમાંય ખાસ કરીને પવિત્ર કે ધાર્મિક કહેવાય છે તેવા સ્થાનક પર કથાનો દોર ચાલે છે. ઉનાળાની કાળઝ..."
}
```
#### unshuffled_deduplicated_he
- **Size of downloaded dataset files:** 3.04 GB
- **Size of the generated dataset:** 10.47 GB
- **Total amount of disk used:** 13.51 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"זקוקים לרשתות נגד יתושים? מחפשים רשת מתאימה לחלון צר וקטן? רשתות נגד יתושים אקורדיון של חברת קליר-מש הן הפתרון.\\nרשתות לחלונות ..."
}
```
#### unshuffled_deduplicated_hi
- **Size of downloaded dataset files:** 2.01 GB
- **Size of the generated dataset:** 9.57 GB
- **Total amount of disk used:** 11.58 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"'आइटम गर्ल' बनकर हिट हुई थीं राखी सावंत, आज करीना-कटरीना तक फॉलो कर रही हैं ट्रेंड नक्सलियों का दम निकालेगा बाइक ग्रेनेड लॉन्च..."
}
```
#### unshuffled_deduplicated_hr
- **Size of downloaded dataset files:** 46.74 MB
- **Size of the generated dataset:** 121.50 MB
- **Total amount of disk used:** 168.23 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"U raspravi je sudjelovao i HSS-ov saborski zastupnik rekavši kako poljoprivrednici ne osjete mjere o kojima ministar govori jer..."
}
```
#### unshuffled_deduplicated_hsb
- **Size of downloaded dataset files:** 0.72 MB
- **Size of the generated dataset:** 1.89 MB
- **Total amount of disk used:** 2.61 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Budyšin (SN/BŠe). Elektronikarjo mějachu lětsa cyle hinaši zazběh do swojeho wukubłanja. Wokrjesne rjemjeslnistwo bě mjenujcy w..."
}
```
#### unshuffled_deduplicated_ht
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan..."
}
```
#### unshuffled_deduplicated_hu
- **Size of downloaded dataset files:** 7.37 GB
- **Size of the generated dataset:** 19.09 GB
- **Total amount of disk used:** 26.46 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"monster - Amatőr, házi szex videók és kezdő csjaok pornó filmjei. - Free amateur, home made sex videos and online porn movies. ..."
}
```
#### unshuffled_deduplicated_hy
- **Size of downloaded dataset files:** 393.62 MB
- **Size of the generated dataset:** 1.56 GB
- **Total amount of disk used:** 1.96 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Արցախի Հանրապետության հռչակման 26-րդ տարեդարձի կապակցությամբ Շուշիի Արվեստի կենտրոնում կազմակերպվել է մոսկվաբնակ նկարիչներ՝ հայ..."
}
```
#### unshuffled_deduplicated_ia
- **Size of downloaded dataset files:** 0.05 MB
- **Size of the generated dataset:** 0.38 MB
- **Total amount of disk used:** 0.43 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha h..."
}
```
#### unshuffled_deduplicated_id
- **Size of downloaded dataset files:** 6.00 GB
- **Size of the generated dataset:** 17.05 GB
- **Total amount of disk used:** 23.05 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Perihal dari itu, kalau kunci hal yang demikian hilang, pemilik wajib melapor ke bengkel sah untuk dibuatkan kunci baru dengan ..."
}
```
#### unshuffled_deduplicated_ie
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Plastic Yo Yo Metal Yo Yos Wooden Yo Yo Keychain Yo Yo Translucent Yo Yo Light Up Yo Yo Globe Yo Yo Stress Reliever Yo Yo Jellyfish Yo Yo Sports Ball Yo Yo Sound Yo Yo Miniature Yo Yo Promotional Yo Yo Novelty Yo Yo Video Game Yo Yo ECO Recycled Yo Yo"
}
```
#### unshuffled_deduplicated_ilo
- **Size of downloaded dataset files:** 0.23 MB
- **Size of the generated dataset:** 0.68 MB
- **Total amount of disk used:** 0.91 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Segun ken ni Ping-ay, ti yellow corn ti maysa kadagiti nadakamat a liberalized agricultural commodity iti daytoy a free trade k..."
}
```
#### unshuffled_deduplicated_io
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.14 MB
- **Total amount of disk used:** 0.19 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Chekia esas parlamentala republiko. La chefo di stato esas la prezidanto. Til 2013 lu elektesis dal parlamento. Pos ta yaro, ol..."
}
```
#### unshuffled_deduplicated_is
- **Size of downloaded dataset files:** 332.87 MB
- **Size of the generated dataset:** 894.28 MB
- **Total amount of disk used:** 1.23 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Eyjar.net - upplýsinga- og fréttamiðill um Vestmannaeyjar - Fréttir - Nái núverandi stefna stjórnvalda fram að ganga mun það va..."
}
```
#### unshuffled_deduplicated_it
- **Size of downloaded dataset files:** 27.93 GB
- **Size of the generated dataset:** 74.09 GB
- **Total amount of disk used:** 102.03 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Jaundice - causes, treatment & pathology massaggio a osteochondrosis dellindizio di una controindicazione\\nTrattamento su un co..."
}
```
#### unshuffled_deduplicated_ja
- **Size of downloaded dataset files:** 40.80 GB
- **Size of the generated dataset:** 113.63 GB
- **Total amount of disk used:** 154.44 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"神社などへ一緒に同行して、様々な角度のショットで家族写真やお子様の写真を撮影致します!お好みに合わせて様々な写真を取ることができますので、その場でカメラマンへのリクエストも可能です!お子様の晴れ姿を、緊張していない自然な笑顔で残しませんか?\\n※七五三の..."
}
```
#### unshuffled_deduplicated_jbo
- **Size of downloaded dataset files:** 0.20 MB
- **Size of the generated dataset:** 0.70 MB
- **Total amount of disk used:** 0.91 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "ni'o 23 la cimast. cu 23moi djedi fi'o masti la cimast. noi ke'a cu cimoi masti .i 22 la cimast. cu purlamdei .ije 24 la cimast. cu bavlamdei"
}
```
#### unshuffled_deduplicated_jv
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.62 MB
- **Total amount of disk used:** 0.82 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"José Mourinho (diwaca: [ʒuˈzɛ moˈɾiɲu]; lair ing Setubal, Portugal, 26 Januari 1963; umur 55 taun) iku salah siji pelatih bal k..."
}
```
#### unshuffled_deduplicated_ka
- **Size of downloaded dataset files:** 377.23 MB
- **Size of the generated dataset:** 1.99 GB
- **Total amount of disk used:** 2.36 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"წამიყვანე შენთან ერთად (ქართულად) / Возьми меня с собой (картулад) / (რუსული სერიალები ქართულად) (რუსების პორნო ონლაინში) (ruse..."
}
```
#### unshuffled_deduplicated_kk
- **Size of downloaded dataset files:** 389.12 MB
- **Size of the generated dataset:** 1.59 GB
- **Total amount of disk used:** 1.97 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Түлкібас ауданында «Латын негізді әліпби мен емле ережесі туралы насихат» жобасының тобы семинар өткізді\\nЕлорданың «Қазақстан»..."
}
```
#### unshuffled_deduplicated_km
- **Size of downloaded dataset files:** 114.48 MB
- **Size of the generated dataset:** 610.61 MB
- **Total amount of disk used:** 725.09 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ខ្សឹបដាក់ត្រចៀក៖ លោក សួស សុផានិត នាយផ្នែករដ្ឋបាលព្រៃឈើ ស្រុកភ្នំក្រវាញ់ ដែលទើបឡើងកាន់តំណែងថ្មី បើកដៃឲ្យឈ្នួញ ប្រព្រឹត្តបទល្មើស ..."
}
```
#### unshuffled_deduplicated_kn
- **Size of downloaded dataset files:** 215.52 MB
- **Size of the generated dataset:** 1.08 GB
- **Total amount of disk used:** 1.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ರಾಷ್ಟ್ರಪತಿ ಪ್ರಣಬ್ ಮುಖರ್ಜಿಯಿಂದ ಪದ್ಮ ಪ್ರಶಸ್ತಿ ಪ್ರದಾನ | President Pranab Mukherjee Confers Padma Awards | Photo Gallery on Kannada..."
}
```
#### unshuffled_deduplicated_ko
- **Size of downloaded dataset files:** 4.46 GB
- **Size of the generated dataset:** 12.00 GB
- **Total amount of disk used:** 16.47 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"CIA 프로젝트에서는 데이터베이스로 들어오는 요청을 중간에 수집(Sniffing)하고 수집한 데이터를 분석(Parsing)하여 그로 인한 결과를 판단하여 알릴 수 있는 시스템(Push Service)이 필요하다. 그리고 연구를 ..."
}
```
#### unshuffled_deduplicated_krc
- **Size of downloaded dataset files:** 0.62 MB
- **Size of the generated dataset:** 2.41 MB
- **Total amount of disk used:** 3.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Шамханланы, Бийлени къаршысына ябушуп, Батыр уланларыбызны къоллары булан «ортакъ ожакъ» къургъанбыз. Шо иш уллу зараллы иш бол..."
}
```
#### unshuffled_deduplicated_ku
- **Size of downloaded dataset files:** 23.34 MB
- **Size of the generated dataset:** 63.09 MB
- **Total amount of disk used:** 86.43 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Me di 114 bernameyên xwe yên berê da perçeyên ji berhemên zanyarî yên kurdzanên mezin bi wergera kurdî da ...\\nMe di 114 bernam..."
}
```
#### unshuffled_deduplicated_kv
- **Size of downloaded dataset files:** 0.33 MB
- **Size of the generated dataset:** 1.21 MB
- **Total amount of disk used:** 1.54 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Коми кытшыслӧн ыджытжык тор вӧр увтын куйлӧ, сійӧн и фаунасӧ татӧн аркмӧтӧны вӧрын олісь подаэз. Ассямаӧн лоӧ сія, мый кытшас с..."
}
```
#### unshuffled_deduplicated_kw
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼Pray without ceasing🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏..."
}
```
#### unshuffled_deduplicated_ky
- **Size of downloaded dataset files:** 106.22 MB
- **Size of the generated dataset:** 408.40 MB
- **Total amount of disk used:** 514.61 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Turmush: Бишкек шаардык кеңешинин кезексиз отурумунда мэрге ишенбөөчүлүк көрсөтүү маселеси каралат, - депутат Т.Сагынов\\nБишкек..."
}
```
#### unshuffled_deduplicated_la
- **Size of downloaded dataset files:** 3.42 MB
- **Size of the generated dataset:** 9.79 MB
- **Total amount of disk used:** 13.22 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Hæ sunt generationes Noë: Noë vir justus atque perfectus fuit in generationibus suis; cum Deo ambulavit.\\nEcce ego adducam aqua..."
}
```
#### unshuffled_deduplicated_lb
- **Size of downloaded dataset files:** 8.30 MB
- **Size of the generated dataset:** 21.42 MB
- **Total amount of disk used:** 29.72 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Während dem Gaardefestival \\\"Ambiance Jardins\\\" vum 15. bis de 17. Mee huet den SNJ nees zesumme mam Groupe Animateur en Inform..."
}
```
#### unshuffled_deduplicated_lez
- **Size of downloaded dataset files:** 0.77 MB
- **Size of the generated dataset:** 3.08 MB
- **Total amount of disk used:** 3.84 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Ахцегь хуьр, виридалай ч1ехи лезги хуьрерикая я. Ам Урусатдин виридалай къиблепатавай хуьрерикай я. Ин хуьр...\"..."
}
```
#### unshuffled_deduplicated_li
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.03 MB
- **Total amount of disk used:** 0.04 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"'t Good Goedenraad aan de Ezerbaek besjteit oet 'n kesjtièl mèt gesjlote haof en 'n park van 26 hectare. Hie in sjtoon väól beu..."
}
```
#### unshuffled_deduplicated_lmo
- **Size of downloaded dataset files:** 0.10 MB
- **Size of the generated dataset:** 0.46 MB
- **Total amount of disk used:** 0.57 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Serét (en tortonés: Sregh; en piemontés: Srèj) l'è 'n cümü italià, de la regiù del Piemónt, en Pruvìncia de Alessandria. El g'h..."
}
```
#### unshuffled_deduplicated_lo
- **Size of downloaded dataset files:** 23.63 MB
- **Size of the generated dataset:** 119.29 MB
- **Total amount of disk used:** 142.92 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"ຜູ້ພິພາກສາ ປະຈຳເຂດ ສຫລ ທ່ານນຶ່ງ ຕັດສິນວ່າ ໂຄງການເກັບກຳຂໍ້ມູນ ທາງໂທລະສັບ ຂອງອົງການ ຄວາມໝັ້ນຄົງແຫ່ງຊາດ ແມ່ນຖືກຕ້ອງ ຕາມກົດໝາຍ.\\nກະ..."
}
```
#### unshuffled_deduplicated_lrc
- **Size of downloaded dataset files:** 0.02 MB
- **Size of the generated dataset:** 0.06 MB
- **Total amount of disk used:** 0.08 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"آرلینگتون یئ گئل د شأریا ڤولاتچە ڤیرجینیا و یئ گئل د شأریا ڤولات ڤولاتچە یا یأکاگئرئتە ئمریکاە. ئی شأر دویومی کألوٙن شأر د راسا..."
}
```
#### unshuffled_deduplicated_lt
- **Size of downloaded dataset files:** 1.65 GB
- **Size of the generated dataset:** 4.20 GB
- **Total amount of disk used:** 5.86 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Čir vir vir pavasaris! Čia čia čia… dalinamės labai simpatiška video pamokėle, kurią pristato ab888art galerija.\\nBe galo papra..."
}
```
#### unshuffled_deduplicated_lv
- **Size of downloaded dataset files:** 710.45 MB
- **Size of the generated dataset:** 1.91 GB
- **Total amount of disk used:** 2.62 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Dekoratīvi sliekšņi MITSUBISHI OUTLANDER 2007, izgatavoti no ovālas formas, pulētas nerūsējošā tērauda caurules...\\ndažādas tūn..."
}
```
#### unshuffled_deduplicated_mai
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"१ · २ · ३ · ४ · ५ · ६ · ७ · ८ · ९ · १० · ११ · १२ · १३ · १४ · १५ · १६ · १७ · १८ · १९ · २० · २१ · २२ · २३ · २४ · २५ · २६ · २७ · २..."
}
```
#### unshuffled_deduplicated_mg
- **Size of downloaded dataset files:** 4.30 MB
- **Size of the generated dataset:** 13.59 MB
- **Total amount of disk used:** 17.89 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Nanamboatra taratasy apetaka sy soso-kevitra ho an'ny olona te-hanatevin-daharana ity fihetsiketsehana ity i Anocrena.\\nNosorat..."
}
```
#### unshuffled_deduplicated_mhr
- **Size of downloaded dataset files:** 1.63 MB
- **Size of the generated dataset:** 6.26 MB
- **Total amount of disk used:** 7.89 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Акрет жап годым Уганда кундемым Пигмей племена- влак айлен шогеныт. мемнан эран 1 курым гыч Банту племена влакат тиде кундемышк..."
}
```
#### unshuffled_deduplicated_min
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.31 MB
- **Total amount of disk used:** 0.33 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\" ..."
}
```
#### unshuffled_deduplicated_mk
- **Size of downloaded dataset files:** 303.12 MB
- **Size of the generated dataset:** 1.19 GB
- **Total amount of disk used:** 1.49 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"„Филм плус“ е насловен првиот филмски месечник во Македонија, чиј прв број ќе биде промовиран вечер во „Менада“. Новото македон..."
}
```
#### unshuffled_deduplicated_ml
- **Size of downloaded dataset files:** 496.80 MB
- **Size of the generated dataset:** 2.69 GB
- **Total amount of disk used:** 3.18 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"സ്ത്രീ പ്രവേശനം സര്ക്കാര് പൂര്ണമായും അംഗീകരിക്കുന്നുവെന്നും ശബരിമലയുടെ സുരക്ഷയില് ഇടപെടുമെന്നും സര്ക്കാര് ഹൈക്കോടതിയില്\\..."
}
```
#### unshuffled_deduplicated_mn
- **Size of downloaded dataset files:** 219.52 MB
- **Size of the generated dataset:** 883.46 MB
- **Total amount of disk used:** 1.10 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"МУБИС-ын багш мэргэжлийн хөрвөх сургалтыг төгссөн багшид багшлах эрх олгох тухай ~ БМДИ-ийн захирлын тушаал - Багшийн мэргэжил ..."
}
```
#### unshuffled_deduplicated_mr
- **Size of downloaded dataset files:** 299.68 MB
- **Size of the generated dataset:** 1.49 GB
- **Total amount of disk used:** 1.79 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Home / motivational marathi story / उद्योजकता (Entrepreneurship) / यांना हे जमलय, तर आपल्याला का नाही जमणार ?\\nयापैकी कोणाचीही ..."
}
```
#### unshuffled_deduplicated_mrj
- **Size of downloaded dataset files:** 0.29 MB
- **Size of the generated dataset:** 1.10 MB
- **Total amount of disk used:** 1.38 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Лӹпӹвлӓ (латинлӓ Lepidoptera ; алыкмарла лыве-влак) — капшангывлӓ йыхыш пырышы сӱмӓн нӹл шылдыран капшангывлӓ. Цилӓжӹ 180000 тӹ..."
}
```
#### unshuffled_deduplicated_ms
- **Size of downloaded dataset files:** 16.39 MB
- **Size of the generated dataset:** 49.45 MB
- **Total amount of disk used:** 65.85 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Sanad pertama daripada Zuhair bin Harb daripada ‘Affan daripada Hammad daripada Thabit daripada Anas.\\nSanad kedua daripada ‘Ab..."
}
```
#### unshuffled_deduplicated_mt
- **Size of downloaded dataset files:** 5.90 MB
- **Size of the generated dataset:** 17.68 MB
- **Total amount of disk used:** 23.58 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "tibgħat il-kawża lura lill-Qorti Ġenerali għall-annullament jew għat-tnaqqis tal-penalità imposta mill-Kummissjoni bid-deċiżjoni inizjali kif emendata bid-deċiżjoni ta’ rettifika;"
}
```
#### unshuffled_deduplicated_mwl
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Deciplina social i outónoma que angloba atebidades de ouserbaçon, de análeze, de çcriçon, cumparaçon, de sistematizaçon i de sp..."
}
```
#### unshuffled_deduplicated_my
- **Size of downloaded dataset files:** 207.14 MB
- **Size of the generated dataset:** 1.11 GB
- **Total amount of disk used:** 1.32 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ျမ၀တီ - ရန္ကုန္တိုင္းေဒသႀကီး ေျမာက္ဥကၠလာပႏွင္႕ ဗဟန္းၿမိဳ႔နယ္ မေကြးတိုင္း ေဒသႀကီး ပခုကၠဴၿမိဳ႔နယ္တို႔၌ ျမန္မာ႕တပ္မေတာ္အား ေထာက္ခံ..."
}
```
#### unshuffled_deduplicated_myv
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"2018 иень умарьковонь 6-це чистэ сась паро куля! Россиянь культурань Министерствась макссь невтемань конёв (прокатной удостовер..."
}
```
#### unshuffled_deduplicated_mzn
- **Size of downloaded dataset files:** 0.16 MB
- **Size of the generated dataset:** 0.63 MB
- **Total amount of disk used:** 0.79 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"قرآن یا قوران اسلام ِآسمونی کتاب هسته. مسلمونون گانّّه قرآن ره خدا، وحی جه برسنییه، «محمد معجزه» هسته و ثقلین حدیث دله ونه خَو..."
}
```
#### unshuffled_deduplicated_nah
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "In mācuīlpōhualxihuitl VI (inic chicuacē) in mācuīlpōhualli xiuhitl cāhuitl īhuīcpa 501 xihuitl oc 600 xihuitl."
}
```
#### unshuffled_deduplicated_nap
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ò AUDIT í Ç è î ÿ å å 30 ò ÿ ÿ é, õ ñ ì ÿ, ê ã- ò à ì. å â å í ç â à à é ñ è å é ó ó ë. å å å û è å î é è à. à è à AUDIT 1-7 â ..."
}
```
#### unshuffled_deduplicated_nds
- **Size of downloaded dataset files:** 5.27 MB
- **Size of the generated dataset:** 13.48 MB
- **Total amount of disk used:** 18.76 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Dor kann sik vun nu af an de hele plattdüütsche Welt – vun Niebüll bit New York, vun Helgoland bit Honolulu – drapen. Allens, w..."
}
```
#### unshuffled_deduplicated_ne
- **Size of downloaded dataset files:** 240.63 MB
- **Size of the generated dataset:** 1.24 GB
- **Total amount of disk used:** 1.48 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"बर्दिबास नगरपालिकाको तेस्रो नगर परिषदबाट पारित आ.व.२०७३।७४ को संशोधित र २०७४।७५ को प्रस्तावित नीति, कार्यक्रम तथा बजेट\\nअार्थिक..."
}
```
#### unshuffled_deduplicated_new
- **Size of downloaded dataset files:** 0.83 MB
- **Size of the generated dataset:** 4.26 MB
- **Total amount of disk used:** 5.09 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"थ्व शहरयागु अक्षांश ३४.७००१६४ उत्तर व देशान्तर ८६.३७६४६९ पश्चिम खः (34.700164° N 86.376469° W)। थ्व थासे ७२२६७३२ वर्ग मिटर (२.७..."
}
```
#### unshuffled_deduplicated_nl
- **Size of downloaded dataset files:** 15.73 GB
- **Size of the generated dataset:** 41.91 GB
- **Total amount of disk used:** 57.65 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Op vrijdag 31 augustus wordt het nieuwe studiejaar van de masteropleiding architectuur geopend met een dagexcursie naar Venlo.\\..."
}
```
#### unshuffled_deduplicated_nn
- **Size of downloaded dataset files:** 23.58 MB
- **Size of the generated dataset:** 58.32 MB
- **Total amount of disk used:** 81.90 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Planomtale krav til innhald Bakgrunn: Spørsmål frå fleire kommunar om kva ein planomtale/planbeskrivelse bør innehalde Fylkeskommunen og fylkesmannen har i ein del saker reist motsegn på formelt grunnlag"
}
```
#### unshuffled_deduplicated_no
- **Size of downloaded dataset files:** 1.96 GB
- **Size of the generated dataset:** 5.11 GB
- **Total amount of disk used:** 7.07 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Ytterligere aktører i primærhelsetjenesten og andre NHS-virksomheter ble infisert, inkludert legekontor.Læreren vår er så attra..."
}
```
#### unshuffled_deduplicated_oc
- **Size of downloaded dataset files:** 1.34 MB
- **Size of the generated dataset:** 4.00 MB
- **Total amount of disk used:** 5.34 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": ".рф (rf, còdi punycode: .xn--p1ai)[1] es lo nom de domeni en rus per Russia. Foguèt activat lo 12 de mai de 2010. Lo còdi latin es .ru."
}
```
#### unshuffled_deduplicated_or
- **Size of downloaded dataset files:** 38.72 MB
- **Size of the generated dataset:** 197.63 MB
- **Total amount of disk used:** 236.36 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ଭୁବନେଶ୍ୱର, ୨୭/୧– (ଓଡ଼ିଆ ପୁଅ) ସିପିଆଇ ଜାତୀୟ ପରିଷଦର ଆହ୍ୱାନକ୍ରମେ ଗତକାଲି ଜାନୁୟାରୀ ୨୬ ସାଧାରଣତନ୍ତ୍ର ଦିବସକୁ ଦେଶ ବ୍ୟାପୀ ସମ୍ବିଧାନ ସୁରକ୍ଷା ..."
}
```
#### unshuffled_deduplicated_os
- **Size of downloaded dataset files:** 2.83 MB
- **Size of the generated dataset:** 11.00 MB
- **Total amount of disk used:** 13.83 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"1. Лæппу æмæ чызг казрæдзийы зæрдæмæ куы фæцæуынц æмæ, куы сфæнд кæнынц сæ цард баиу кæнын, уæд лæппу бар ракуры чызгæй, цæмæй ..."
}
```
#### unshuffled_deduplicated_pa
- **Size of downloaded dataset files:** 102.39 MB
- **Size of the generated dataset:** 483.04 MB
- **Total amount of disk used:** 585.42 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ਰਜਿ: ਨੰ: PB/JL-138/2018-20 ਜਿਲਦ 63, ਬਾਨੀ ਸੰਪਾਦਕ (ਸਵ:) ਡਾ: ਸਾਧੂ ਸਿੰਘ ਹਮਦਰਦ ਫ਼ੋਨ : 0181-2455961-62-63, 5032400, ਫੈਕਸ : 2455960, 2..."
}
```
#### unshuffled_deduplicated_pam
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Áku pu i Anak ning Aláya at ngeni ipákit kó kékayu ngan nûng makanánu lang susúlat détinang kulit a mágkas. Lauan ya ing tarátu..."
}
```
#### unshuffled_deduplicated_pl
- **Size of downloaded dataset files:** 20.19 GB
- **Size of the generated dataset:** 50.59 GB
- **Total amount of disk used:** 70.78 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"System informatyczny - Załącznik nr 1 do zarządzenia Wójta Gminy Podegrodzie Nr 530/2013 z dnia 27 maja 2013 r\\nSystem informat..."
}
```
#### unshuffled_deduplicated_pms
- **Size of downloaded dataset files:** 0.71 MB
- **Size of the generated dataset:** 2.00 MB
- **Total amount of disk used:** 2.72 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Louvigné-du-Désert a l'é na comun-a fransèisa ant la region aministrativa dla Brëtagna, ant ël dipartiment d'Ille-et-Vilaine. A..."
}
```
#### unshuffled_deduplicated_pnb
- **Size of downloaded dataset files:** 2.58 MB
- **Size of the generated dataset:** 9.44 MB
- **Total amount of disk used:** 12.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"ایہ فائل Wikimedia Commons توں اے تے دوجیاں ویونتاں تے وی ورتی جاےکدی اے۔ گل بات اس دے فائل گل بات صفہ تے تھلے دتی گئی۔\"..."
}
```
#### unshuffled_deduplicated_ps
- **Size of downloaded dataset files:** 71.83 MB
- **Size of the generated dataset:** 254.79 MB
- **Total amount of disk used:** 326.61 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Many people usually use the time period ‘business to business (B2B) advertising,’ however most of them do not know precisely wh..."
}
```
#### unshuffled_deduplicated_pt
- **Size of downloaded dataset files:** 26.00 GB
- **Size of the generated dataset:** 68.37 GB
- **Total amount of disk used:** 94.37 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Você pode estar lendo este texto no sofá, levantar pra pegar uma breja na geladeira, dar uma cagada e sentar novamente, sem int..."
}
```
#### unshuffled_deduplicated_qu
- **Size of downloaded dataset files:** 0.02 MB
- **Size of the generated dataset:** 0.07 MB
- **Total amount of disk used:** 0.09 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Warayu wichay (kastilla simipi: Ascensión de Guarayos) nisqaqa Buliwya mama llaqtapi, Santa Krus suyupi, huk llaqtam, Warayu pruwinsyap uma llaqtanmi."
}
```
#### unshuffled_deduplicated_rm
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"practicists agrars / practicistas agraras AFP pon far ina furmaziun da basa scursanida per cuntanscher in attestat federal da q..."
}
```
#### unshuffled_deduplicated_ro
- **Size of downloaded dataset files:** 4.48 GB
- **Size of the generated dataset:** 11.66 GB
- **Total amount of disk used:** 16.14 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"“În viață, oportunitatea nu este totul. Cine atrage Lumina, cineva bun în umbră. Timpul ne creează.” maestru\\nLyn.Evans: Ce mar..."
}
```
#### unshuffled_deduplicated_ru
- **Size of downloaded dataset files:** 166.68 GB
- **Size of the generated dataset:** 611.70 GB
- **Total amount of disk used:** 778.38 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Доступ к данному профилю для публичного просмотра закрыт администрацией сайта - профиль находится на модерации.\\nРазработчикам ..."
}
```
#### unshuffled_deduplicated_sa
- **Size of downloaded dataset files:** 7.27 MB
- **Size of the generated dataset:** 38.33 MB
- **Total amount of disk used:** 45.60 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"अनिरुद्धनगरे क्रीडिता रामलीला सम्प्रति समाप्ता अस्ति । तस्य कानिचन् चित्राणि पूर्वमेव प्रकाशितानि सन्ति । द्वौ चलचित्रौ अपि ..."
}
```
#### unshuffled_deduplicated_sah
- **Size of downloaded dataset files:** 7.01 MB
- **Size of the generated dataset:** 27.46 MB
- **Total amount of disk used:** 34.49 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████..."
}
```
#### unshuffled_deduplicated_scn
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "La gilusìa è nu sintimentu dulurusu ca nasci d'un disideriu di pussessu sclusivu ntê cunfrunti dâ pirsuna amata e dû timuri, dû suspettu o dâ cirtizza dâ sò nfidiltati."
}
```
#### unshuffled_deduplicated_sd
- **Size of downloaded dataset files:** 74.17 MB
- **Size of the generated dataset:** 275.48 MB
- **Total amount of disk used:** 349.66 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"هر ڪو ڄاڻي ٿو ته جڏهن توهان هڪ وڏي خريد ڪرڻ چاهيون ٿا, توهان پڄي ضروري حڪم ۾ ان جي ڪم ڪرڻ جي هٿ ۾ لاڳاپو ڪيو آهي. جي شيء آهي ته..."
}
```
#### unshuffled_deduplicated_sh
- **Size of downloaded dataset files:** 1.45 MB
- **Size of the generated dataset:** 6.44 MB
- **Total amount of disk used:** 7.87 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Opština Gornja Radgona se nalazi u sjeveroistočnoj Sloveniji i graniči s susjednom Austriji duž rijeke Mure. Sa tridesetim nase..."
}
```
#### unshuffled_deduplicated_si
- **Size of downloaded dataset files:** 175.62 MB
- **Size of the generated dataset:** 842.57 MB
- **Total amount of disk used:** 1.02 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"ලාංකීය සිතිවිලි සිංහල බ්ලොග් කියවනය කොත්තු සින්ඩිය ලංකා Blogger හත්මාළුව ලංකා බ්ලොග් කියවනය මාතලන්ගේ සින්ඩිය මොබයිල්lk\\nඅවකාශය ..."
}
```
#### unshuffled_deduplicated_sk
- **Size of downloaded dataset files:** 1.96 GB
- **Size of the generated dataset:** 4.80 GB
- **Total amount of disk used:** 6.76 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Aktivity | Agentúra podporovaného zamestnávania | vzdelávanie pre klientov, vzdelávanie pre odborníkov, kurzy\\nŠpecializované k..."
}
```
#### unshuffled_deduplicated_sl
- **Size of downloaded dataset files:** 523.22 MB
- **Size of the generated dataset:** 1.32 GB
- **Total amount of disk used:** 1.85 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Če Creatures, ki je želel, da pridejo na čas, predvsem je povedlo – razlikuje od ljubosumja začel grizenja kolen (ali zadnjica)..."
}
```
#### unshuffled_deduplicated_so
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"тттттттттттттттттттттттттттттттт тттттттттттттттттттттттттттттттт тттттттттттттттттттттттттттттттт ттттттттттттттттуууууууууууу..."
}
```
#### unshuffled_deduplicated_sq
- **Size of downloaded dataset files:** 445.36 MB
- **Size of the generated dataset:** 1.21 GB
- **Total amount of disk used:** 1.66 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Çfarë do të më pëlqente tek një femër ose çfarë do të më shndërronte në një shpërthim drite? – Albert Vataj\\nTë gjithëve një zo..."
}
```
#### unshuffled_deduplicated_sr
- **Size of downloaded dataset files:** 665.03 MB
- **Size of the generated dataset:** 2.36 GB
- **Total amount of disk used:** 3.03 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Корисни савети за сваки дан. На сајту су разне категорије, као што су љепота, мода, кување и поправка властитим рукама.\\nШколск..."
}
```
#### unshuffled_deduplicated_su
- **Size of downloaded dataset files:** 0.05 MB
- **Size of the generated dataset:** 0.16 MB
- **Total amount of disk used:** 0.21 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Kartu krédit nyaéta \"duit plastik\" anu dikaluarkeun ku bank pikeun alat pambayaran di tempat-tempat nu tangtu samisal jiga di hotél, réstoran, tempat rékréasi jeung sajabana.[1]"
}
```
#### unshuffled_deduplicated_sv
- **Size of downloaded dataset files:** 10.19 GB
- **Size of the generated dataset:** 26.33 GB
- **Total amount of disk used:** 36.51 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"1783 är ett viktigt årtal i den nya tidens historia. Det året slöts en fred i Paris och därmed blev de 13 brittiska kolonierna ..."
}
```
#### unshuffled_deduplicated_sw
- **Size of downloaded dataset files:** 2.95 MB
- **Size of the generated dataset:** 8.98 MB
- **Total amount of disk used:** 11.92 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Miripuko hiyo inakuja mwanzoni mwa Wiki Takatifu kuelekea Pasaka na ikiwa ni wiki chache tu kabla ya Papa Francis kuanza ziara yake katika nchi hiyo yenye idadi kubwa kabisa ya watu katika ulimwengu wa nchi za Kiarabu."
}
```
#### unshuffled_deduplicated_ta
- **Size of downloaded dataset files:** 971.12 MB
- **Size of the generated dataset:** 5.48 GB
- **Total amount of disk used:** 6.45 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"பொழுது சாய்ந்து வெகு நேரமாகிவிட்டது. கூலி வேலைக்குப் போயிருந்த 'சித்தாள் ' பெண்கள் எல்லோரும் வீடு திரும்பி விட்டார்கள். இன்னும்..."
}
```
#### unshuffled_deduplicated_te
- **Size of downloaded dataset files:** 342.43 MB
- **Size of the generated dataset:** 1.70 GB
- **Total amount of disk used:** 2.04 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"హర్యానాలో టోల్ దగ్గర సిబ్బంది.. స్థానిక ప్రజలు కొట్టుకున్నారు. కర్నాల్ అనే గ్రామానికి సమీపంలో టోల్ గేట్ ఉంది. అయితే సాధారణంగా స..."
}
```
#### unshuffled_deduplicated_tg
- **Size of downloaded dataset files:** 62.90 MB
- **Size of the generated dataset:** 261.68 MB
- **Total amount of disk used:** 324.60 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Ҳумайро гуфтааст, мухолифи низом аст, низоме, ки дар Тоҷикистон вуҷуд дорад. Ба ин маънӣ, худро мухолифи давлату ҳукумати Тоҷик..."
}
```
#### unshuffled_deduplicated_th
- **Size of downloaded dataset files:** 3.54 GB
- **Size of the generated dataset:** 17.11 GB
- **Total amount of disk used:** 20.65 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ฟันที่แลดูขาวสะอาดไม่มีเศษอาหารติดอยู่ เหงือกสีชมพู ไม่เจ็บ หรือมีเลือดออกเวลาแปรงฟันหรือขัดฟัน ไม่มีปัญหาเรื่องกลิ่นปาก ทำให้ก..."
}
```
#### unshuffled_deduplicated_tk
- **Size of downloaded dataset files:** 2.22 MB
- **Size of the generated dataset:** 7.12 MB
- **Total amount of disk used:** 9.34 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Türkmenistanyň Prezidenti agyr atletika boýunça dünýä çempionatyna taýýarlyk işleriniň barşy bilen tanyşdy\\nHalallykdan kemal t..."
}
```
#### unshuffled_deduplicated_tl
- **Size of downloaded dataset files:** 151.34 MB
- **Size of the generated dataset:** 431.69 MB
- **Total amount of disk used:** 583.04 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"“Gusto ko manawagan sa mga Unit Head ng Chanel 2 Salve. Kasi napapansin ko iyon mga alaga ko ang taping halos once a week lang,..."
}
```
#### unshuffled_deduplicated_tr
- **Size of downloaded dataset files:** 10.39 GB
- **Size of the generated dataset:** 28.47 GB
- **Total amount of disk used:** 38.86 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Son yıllarda görülen ay tutulmalarına göre daha etkili olacağı söylenen Kanlı veya Kırmızı Ay Tutulmasına saatler kaldı. Bu akş..."
}
```
#### unshuffled_deduplicated_tt
- **Size of downloaded dataset files:** 85.89 MB
- **Size of the generated dataset:** 321.37 MB
- **Total amount of disk used:** 407.26 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"\\\"Иремнең вафатына 40 көн узгач, Алмаз да безнең өйгә кереп үлде\\\". Арчада 35 яшьлек ир өстенә кондызлар ега башлаган агач төшк..."
}
```
#### unshuffled_deduplicated_tyv
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Экии, хүндүлуг аалчылар болгаш тыва дылдың деткикчилери! Тыва дылдың болгаш чогаалдың ховар бир башкызынга, Менги Ооржакка, ажы..."
}
```
#### unshuffled_deduplicated_ug
- **Size of downloaded dataset files:** 20.53 MB
- **Size of the generated dataset:** 86.44 MB
- **Total amount of disk used:** 106.97 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"زاڭ-ءتۇزىم | عىلىم-تەحنيكا | ءتىل-ادەبيەت | تۇرمىس | دەنە تاربيە | ساياحات-ورتا | سۋرەتتى حابار | سىر سۇحبات | ارناۋلى تاقىرىپ ..."
}
```
#### unshuffled_deduplicated_uk
- **Size of downloaded dataset files:** 8.04 GB
- **Size of the generated dataset:** 29.86 GB
- **Total amount of disk used:** 37.90 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Про надання роз'яснення (щодо форми письмового зобов'язання громадян про зворотне ввезення/вивезення товарів), Державна митна с..."
}
```
#### unshuffled_deduplicated_ur
- **Size of downloaded dataset files:** 483.59 MB
- **Size of the generated dataset:** 1.82 GB
- **Total amount of disk used:** 2.31 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"آئیے اہم اسلامی کتب کو یونیکوڈ میں انٹرنیٹ پر پیش کرنے کے لئے مل جل کر آن لائن ٹائپنگ کریں۔ محدث ٹائپنگ پراجیکٹ کے ذریعے آپ روز..."
}
```
#### unshuffled_deduplicated_uz
- **Size of downloaded dataset files:** 4.30 MB
- **Size of the generated dataset:** 12.00 MB
- **Total amount of disk used:** 16.29 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Qurama tog'lari tizmasining Toshkentdan 154 km uzoqlikdagi Toshkent-Ush yo'li yeqasidaxushmanzara tabiat qo'ynida joylashgan maydoni 30 ga.\nBolalarni sog'lomlashtirish oromgohi Bo'stonliq tumani Oqtosh muntaqasining soy-salqin gushasida joylashgan."
}
```
#### unshuffled_deduplicated_vec
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Par ogni pónto, ła derivada ła xe ła pendensa de ła reta tangente a ła curva de ła funsion f. Ła reta de cołor róso l'è senpre ..."
}
```
#### unshuffled_deduplicated_vi
- **Size of downloaded dataset files:** 10.71 GB
- **Size of the generated dataset:** 33.60 GB
- **Total amount of disk used:** 44.31 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Canh chua cá bông lau không chỉ là món ăn giải nhiệt, thanh mát ngày hè mà còn là món siêu bổ dưỡng, rất tốt cho người gầy ốm. ..."
}
```
#### unshuffled_deduplicated_vo
- **Size of downloaded dataset files:** 0.30 MB
- **Size of the generated dataset:** 2.10 MB
- **Total amount of disk used:** 2.40 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Sarniguet binon zif in ziläk: Hautes-Pyrénées, in topäd: Midi-Pyrénées, in Fransän. Sarniguet topon videtü 43°19’ 7’’ N e lunetü 0°5’ 19’’ L."
}
```
#### unshuffled_deduplicated_wa
- **Size of downloaded dataset files:** 0.08 MB
- **Size of the generated dataset:** 0.22 MB
- **Total amount of disk used:** 0.29 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Cisse pådje ci n' est co k' on djermon, dj' ô bén k' el pådje est djusse sibåtcheye, eyet co trop tene; et s' divreut ele ecråxhî ene miete."
}
```
#### unshuffled_deduplicated_war
- **Size of downloaded dataset files:** 0.55 MB
- **Size of the generated dataset:** 2.36 MB
- **Total amount of disk used:** 2.90 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "An Honce amo in usa ka baryo ngan munisipalidad ha distrito han Rožňava ha rehiyon han Košice ha nasod han Slovakia.\nAn Rumegies amo in usa ka komyun ha departamento han Nord ngan ha rehiyon han Nord-Pas-de-Calais ha nasod han Fransya."
}
```
#### unshuffled_deduplicated_wuu
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.03 MB
- **Total amount of disk used:** 0.04 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"伊春元旦天气 伊春腊八天气 伊春春节天气 伊春情人节天气 伊春元宵节天气 伊春愚人节天气 伊春清明节天气 伊春劳动节天气 伊春母亲节天气 伊春端午节天气 伊春七夕节天气 伊春教师节天气 伊春中秋节天气 伊春国庆节天气 伊春重阳节天气 伊春万圣节天气 伊春..."
}
```
#### unshuffled_deduplicated_xal
- **Size of downloaded dataset files:** 0.03 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.15 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Арнгудин Орн гисн Европд бәәдг һазр. 2007 җилин тooһaр эн орн нутгт 3,600,523 әмтн бәәдг билә. Арнгудин Орнин хотл балһсна нерн..."
}
```
#### unshuffled_deduplicated_xmf
- **Size of downloaded dataset files:** 0.94 MB
- **Size of the generated dataset:** 4.63 MB
- **Total amount of disk used:** 5.58 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"მოჩამილი ტექსტი წჷმორინელი რე Creative Commons Attribution-ShareAlike ლიცენზიათ; შილებე გეძინელი პირობეფიშ არსებუა. კილიშკილიშა..."
}
```
#### unshuffled_deduplicated_yi
- **Size of downloaded dataset files:** 22.20 MB
- **Size of the generated dataset:** 88.29 MB
- **Total amount of disk used:** 110.49 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ממשותדיק - חבֿרה, איך אַרבעט איצט אױף אַ זשורנאַל. טאָמער איר האָט עפּעס צוצוגעבן זאָלט איר שיקן מיר אַן אָנזאָג. ס'װעט הײסן \\\"..."
}
```
#### unshuffled_deduplicated_yo
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.03 MB
- **Total amount of disk used:** 0.04 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Copyright © 2018 BBC. BBC kò mọ̀ nípa àwọn ohun tí ó wà ní àwọn ojú òpó tí ó wà ní ìta. Ọwọ́ tí a fi mú ìbáṣepọ̀ ti ìta.\"..."
}
```
#### unshuffled_deduplicated_yue
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 你還不爆 我累了 投降輸一半可以嗎\"..."
}
```
#### unshuffled_deduplicated_zh
- **Size of downloaded dataset files:** 99.98 GB
- **Size of the generated dataset:** 267.88 GB
- **Total amount of disk used:** 367.86 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"中国铝灰网 中国有色金属矿产网 中国黄莲网 中国水轮发电机网 中国抽油泵网 中国数控雕刻机网 中国不锈钢抛光网 中国磨具加工网 中国压铸铝网 中国耐水腻子网 中国手机摄像头网 中国粗粮网 中国车门锁网 中国钛粉网 中国轮圈网\\n天天中奖彩票图 天天中彩票..."
}
```
</details>
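Because several of these per-language configs are very large (the `en` config alone generates roughly 1.3 TB), it is often more practical to stream a split than to materialize it on disk. A minimal sketch, again assuming the configs are loadable through the `datasets` library under the illustrative Hub id `"oscar"`:
```python
from datasets import load_dataset

# Stream the train split record by record instead of downloading it in full.
# NOTE: "oscar" is an assumed, illustrative Hub id.
stream = load_dataset("oscar", "unshuffled_deduplicated_yo", split="train", streaming=True)

for record in stream:
    # Each record follows the schema shown in the examples above:
    # an integer "id" and a "text" string.
    print(record["id"], record["text"][:80])
    break
```
Streaming yields records lazily over HTTP, so even the largest configs can be sampled without the full download.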
<details>
<summary>Click to expand the Data/size information for each language (original)</summary>
#### unshuffled_original_af
- **Size of downloaded dataset files:** 85.79 MB
- **Size of the generated dataset:** 254.08 MB
- **Total amount of disk used:** 339.87 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "aanlyn markte as gevolg van ons voortgesette 'n begrip opsie handel sakeplan pdf terwyl ons steeds die gereelde ons binêre opsies handel"
}
```
#### unshuffled_original_als
- **Size of downloaded dataset files:** 1.49 MB
- **Size of the generated dataset:** 5.30 MB
- **Total amount of disk used:** 6.78 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"De Nazionalpark hät e Flächi vo 170,3 km² und isch dodemit s grösti Naturschutzgebiet vo de Schwiz. Er ligt uf em Gebiet vo de ..."
}
```
#### unshuffled_original_am
- **Size of downloaded dataset files:** 102.79 MB
- **Size of the generated dataset:** 378.06 MB
- **Total amount of disk used:** 480.85 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"አየር መንገዱ ከአዲስ አበባ ወደ ሮም ጣሊያን በማምራት ላይ በነበረበት ጊዜ ረዳት አብራሪው የጉዞውን አቅጣጫ በመቀየር ጄኔቭ አውሮፓላን ማረፊያ በማሳረፍ እጁን ለፖሊስ ሰጥቷል።\\nየኢትዮጵያ መንግስት የ..."
}
```
#### unshuffled_original_an
- **Size of downloaded dataset files:** 0.15 MB
- **Size of the generated dataset:** 1.33 MB
- **Total amount of disk used:** 1.48 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"واااااااأسفاه الأمم تفتخر ب 0 أمي ووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووووو..."
}
```
#### unshuffled_original_ar
- **Size of downloaded dataset files:** 22.23 GB
- **Size of the generated dataset:** 87.94 GB
- **Total amount of disk used:** 110.17 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"مرحبا بك عزيز الزائر نتمنى لك أوقاتاً سعيدة معنا وأن نزداد شرفا بخدمتك ولا تنسى التسجيل معنا لتستفيد بكل جديد\\nأهلا وسهلا بك زا..."
}
```
#### unshuffled_original_arz
- **Size of downloaded dataset files:** 15.90 MB
- **Size of the generated dataset:** 70.13 MB
- **Total amount of disk used:** 86.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"بنى عجل : قبيلة من عجل بن لجيم بن صعب بن على بن بكر بن وائل انتقل اغلبهم الى البصرة فى العراق و اصفهان و خراسان فى ايران و اذرب..."
}
```
#### unshuffled_original_as
- **Size of downloaded dataset files:** 21.43 MB
- **Size of the generated dataset:** 117.73 MB
- **Total amount of disk used:** 139.17 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"আমি, এই সংগঠনৰ সদস্য সকলে একেলগ হৈ অসমকে ধৰি ভাৰতৰ উত্তৰ পূৰ্বাঞ্চলৰ অমূল্য কলা-সাংস্কৃতিক সম্পদৰাজি বৃহত্তৰ অষ্ট্ৰেলিয়াৰ সন্মু..."
}
```
#### unshuffled_original_ast
- **Size of downloaded dataset files:** 0.92 MB
- **Size of the generated dataset:** 2.54 MB
- **Total amount of disk used:** 3.46 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"The Killers llanzaron el so álbum debú, Hot Fuss, en xunu de 2004 nel Reinu Xuníu, al traviés de la discográfica Lizard King, y..."
}
```
#### unshuffled_original_av
- **Size of downloaded dataset files:** 0.08 MB
- **Size of the generated dataset:** 0.42 MB
- **Total amount of disk used:** 0.50 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Жинда малъараб ва божизе бегьулеб рагІудаса кьуризе бегьуларо гьев. Гьес насихІат гьабизе кколелъул бацІцІадаб диналъул рахъалъ..."
}
```
#### unshuffled_original_az
- **Size of downloaded dataset files:** 927.76 MB
- **Size of the generated dataset:** 2.96 GB
- **Total amount of disk used:** 3.89 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"AZTV-Artıq 7 ildir ki, Abşeron rayonu dotasiya almadan bütün xərclərini yerli daxilolmalar hesabına maliyyələşdirir.\\nDünən, 10..."
}
```
#### unshuffled_original_azb
- **Size of downloaded dataset files:** 6.64 MB
- **Size of the generated dataset:** 28.47 MB
- **Total amount of disk used:** 35.11 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"لعلی ١٣-جو عصرده یاشاییب یاراتمیش گؤرکملی آذربایجان شاعرلریندندیر. ١٢٢٤-جی ایلده تبریزده آنادان اولموشدور، گنج یاشلاریندا تیجار..."
}
```
#### unshuffled_original_ba
- **Size of downloaded dataset files:** 33.22 MB
- **Size of the generated dataset:** 133.70 MB
- **Total amount of disk used:** 166.92 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Күҙәтеү ҡуласаһы моделен хәҙер Мифтахетдин Аҡмулла исемендәге Башҡорт дәүләт педагогия университетында ла эшләргә мөмкин\\t\\nКүҙ..."
}
```
#### unshuffled_original_bar
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": " vo"
}
```
#### unshuffled_original_bcl
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"& ÿ ó / í 0 - ø û ù ö ú ð ï ú \\u0014 ù þ ô ö í ÷ ò \\u0014 ÷ í ù û ö í \\u0001 û ñ ç þ \\u0001 ð \\u0007 þ ò ñ ñ ò ô \\u0017 û ö ô ÷..."
}
```
#### unshuffled_original_be
- **Size of downloaded dataset files:** 498.29 MB
- **Size of the generated dataset:** 1.88 GB
- **Total amount of disk used:** 2.38 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Брэсцкія ўлады не дазволілі прафсаюзу РЭП правесці пікетаванне ў парку Воінаў-інтэрнацыяналістаў 30 мая 2018 года.\\nСітуацыю пр..."
}
```
#### unshuffled_original_bg
- **Size of downloaded dataset files:** 8.34 GB
- **Size of the generated dataset:** 33.75 GB
- **Total amount of disk used:** 42.09 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ЖАЛБОПОДАТЕЛЯТ директор на Дирекция „ Обжалване и данъчно-осигурителна практика“- Бургас, редовно призован, се представлява от ..."
}
```
#### unshuffled_original_bh
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.13 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"सुकमा जिला भारत के छत्तीसगढ़ राज्य में एगो जिला बाटे। एकर मुख्यालय सुकमा शहर बाटे। एकर कुल रकबा 5636 वर्ग कि॰मी॰ बाटे।\"..."
}
```
#### unshuffled_original_bn
- **Size of downloaded dataset files:** 2.14 GB
- **Size of the generated dataset:** 10.77 GB
- **Total amount of disk used:** 12.91 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ভড়ং সর্বস্ব বাংলা আর্ট অ্যান্ড কালচারের হিসাব গুলিয়ে দেওয়ার ম্যাজিকের নাম ব্রাত্য রাইসু November 23, 2017\\nভড়ং সর্বস্ব বাংলা আর..."
}
```
#### unshuffled_original_bo
- **Size of downloaded dataset files:** 28.94 MB
- **Size of the generated dataset:** 195.40 MB
- **Total amount of disk used:** 224.34 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"བོད་མི་འདི་དག་ནི་རང་རྒྱུད་སྒོ་རུ་ཕུད་དེ་གཞན་རྒྱུད་པང་དུ་ཉར་ནས་གསོ་སྐྱོང་བྱེད་དགོས་ཟེར་བ་དང་གཅིག་མཚུངས་རེད།\\nཚན་རིག་ནི་དང་ཐོག་རང..."
}
```
#### unshuffled_original_bpy
- **Size of downloaded dataset files:** 0.34 MB
- **Size of the generated dataset:** 4.35 MB
- **Total amount of disk used:** 4.69 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"পৌরসভা এহার আয়তন (লয়াহান) ২,৭৩০,.৬৩ বর্গ কিলোমিটার। পৌরসভা এহার মাপাহানর অক্ষাংশ বারো দ্রাঘিমাংশ ইলতাই 18.63° S 48.18° W ।[১]..."
}
```
#### unshuffled_original_br
- **Size of downloaded dataset files:** 9.18 MB
- **Size of the generated dataset:** 30.20 MB
- **Total amount of disk used:** 39.38 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Ar mank Magalhães(Daveoù a vank) a zo ur spesad evned, Spheniscus magellanicus an anv skiantel anezhañ.\\nGallout a reer implijo..."
}
```
#### unshuffled_original_bs
- **Size of downloaded dataset files:** 0.05 MB
- **Size of the generated dataset:** 0.48 MB
- **Total amount of disk used:** 0.53 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ž šř é ú šř šř ě šř ž é č ě ž ů ě ď éé ýš ě ě Ž č š ý ě ď é ýš ě ď ě éé ýš ě č ž ě š ý ď ě ýš é ú č ž č š ý ď ý ž é éě ď é č ýš..."
}
```
#### unshuffled_original_bxr
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"2002 оной хабар буряад хэлэ бэшэгэй һалбари Үндэһэтэнэй хүмүүнлиг ухаанай дээдэ һургуули болгогдожо өөршэлэгдөө.\\nХарин мүнөө б..."
}
```
#### unshuffled_original_ca
- **Size of downloaded dataset files:** 3.10 GB
- **Size of the generated dataset:** 8.62 GB
- **Total amount of disk used:** 11.73 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Daniel Vendrell, conegut com Vandrell, ha sigut un dels il•lustradors contemporanis més influents, representant a la nova onada..."
}
```
#### unshuffled_original_cbk
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano yo gano..."
}
```
#### unshuffled_original_ce
- **Size of downloaded dataset files:** 2.09 MB
- **Size of the generated dataset:** 8.73 MB
- **Total amount of disk used:** 10.82 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Шаьш анархисташ ду бохучу жигархойн дIахьедарехь дуьйцу, оьрсийн ницкъаллийн структурийн а, федералан каналан а Iалашонаш \\\"мар..."
}
```
#### unshuffled_original_ceb
- **Size of downloaded dataset files:** 11.07 MB
- **Size of the generated dataset:** 40.97 MB
- **Total amount of disk used:** 52.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Si Isko walay pupamilok nga nagtan-aw sa unahan, natugaw. “Naunsa ka gud diha Isko nga layo man kaayo ang imong panan-aw?” ni I..."
}
```
#### unshuffled_original_ckb
- **Size of downloaded dataset files:** 111.88 MB
- **Size of the generated dataset:** 510.97 MB
- **Total amount of disk used:** 622.85 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"رسی رۆژ - ساڵێک دوای بومەلەرزەی کرماشان میوانی بەرنامە : کاک سیاوەش حەیاتی چالاکی مەدەنی -قەسری شیرین\\nپارچە موزیک 30 / 10 / 20..."
}
```
#### unshuffled_original_cs
- **Size of downloaded dataset files:** 21.72 GB
- **Size of the generated dataset:** 57.08 GB
- **Total amount of disk used:** 78.80 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Akce anarchistů proti připravovanému novému služební řádu a nízkým mzdám 1903 – Historie českého anarchismu (1880 – 1939)\\nRost..."
}
```
#### unshuffled_original_cv
- **Size of downloaded dataset files:** 9.40 MB
- **Size of the generated dataset:** 41.05 MB
- **Total amount of disk used:** 50.45 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Шыранӑ чухне ӑнсӑртран латин кирилл саспаллисем вырӑнне латин саспаллисене ҫырсан, сайт эсир ҫырнине юсама тӑрӑшӗ.\\nКу сайтра ч..."
}
```
#### unshuffled_original_cy
- **Size of downloaded dataset files:** 81.74 MB
- **Size of the generated dataset:** 224.93 MB
- **Total amount of disk used:** 306.67 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Mae capeli Cymreig yr Andes ym Mhatagonia wedi cyhoeddi na fydd gwasanaethau yno weddill y mis, oherwydd yr eira trwm sydd wedi..."
}
```
#### unshuffled_original_da
- **Size of downloaded dataset files:** 6.00 GB
- **Size of the generated dataset:** 16.76 GB
- **Total amount of disk used:** 22.76 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Den 2.-5. februar 2016 løb det tredje kursus i uddannelsen af 4kommunesamarbejdets Local Impact Coaches, af stablen i Gentofte ..."
}
```
#### unshuffled_original_de
- **Size of downloaded dataset files:** 119.51 GB
- **Size of the generated dataset:** 331.22 GB
- **Total amount of disk used:** 450.73 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Auf dieser Seite gibt es mind. ein YouTube Video. Cookies für diese Website wurden abgelehnt. Dadurch können keine YouTube Vide..."
}
```
#### unshuffled_original_diq
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Zıwanê Slawki, zıwano merdumanê Slawano. Zıwanê Slawki yew lızgeyê Zıwananê Hind u Ewropao. Keyeyê Zıwananê Slawki beno hirê letey:"
}
```
#### unshuffled_original_dsb
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Pśiklaskaju južo pśed pśedstajenim... 1500 źiśi njamóžo wěcej docakaś, měsćańska hala w Chóśebuzu - wupśedana."
}
```
#### unshuffled_original_dv
- **Size of downloaded dataset files:** 24.91 MB
- **Size of the generated dataset:** 131.63 MB
- **Total amount of disk used:** 156.54 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ބ. އަތޮޅުގައި ހުޅުވަން ތައްޔާރުވަމުން އަންނަ ވައްކަރު ރިސޯޓުގައި ވަޒީފާ އަދާކުރަން ޝައުގުވެރިވާ ފަރާތްތަކަށް ކުރިމަތިލުމުގެ ފުރ..."
}
```
#### unshuffled_original_el
- **Size of downloaded dataset files:** 17.31 GB
- **Size of the generated dataset:** 66.27 GB
- **Total amount of disk used:** 83.58 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Νεκρός εντοπίστηκε μέσα στο σπίτι του στην οδό Ηρώδου Αττικού στον αριθμό 7 ο επικεφαλής του προξενικού τμήματος της Ρωσικής πρ..."
}
```
#### unshuffled_original_eml
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"A séguit dal prucès ad rubutiśasiòṅ di abitànt dal pòpul ad Mikenes, Angoras 'l è finî dènt'r a 'n robot cun la tèsta dna rana ..."
}
```
#### unshuffled_original_en
- **Size of downloaded dataset files:** 903.83 GB
- **Size of the generated dataset:** 2525.44 GB
- **Total amount of disk used:** 3429.27 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Mtendere Village was inspired by the vision of Chief Napoleon Dzombe, which he shared with John Blanchard during his first visi..."
}
```
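Given the size of the largest configs (the English config alone is roughly 900 GB to download and over 2.5 TB generated), it may be preferable to iterate over records without materializing the whole dataset locally. The sketch below uses the standard `streaming=True` mode of `datasets`; the `oscar` identifier is again an assumption based on the config names shown here.
```
from datasets import load_dataset

# Stream records instead of downloading ~900 GB up front;
# streaming=True returns an IterableDataset that fetches data lazily.
stream = load_dataset(
    "oscar",  # assumed identifier; adjust to the dataset's actual Hub id
    "unshuffled_original_en",
    split="train",
    streaming=True,
)

for i, record in enumerate(stream):
    print(record["id"], record["text"][:80])
    if i == 2:  # inspect only the first few records
        break
```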
#### unshuffled_original_eo
- **Size of downloaded dataset files:** 117.07 MB
- **Size of the generated dataset:** 314.18 MB
- **Total amount of disk used:** 431.27 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Ĉu ... preĝi | mediti | ricevi instigojn || kanti | muziki || informiĝi | legi | studi || prepari Diservon\\nTemas pri kolekto d..."
}
```
#### unshuffled_original_es
- **Size of downloaded dataset files:** 106.04 GB
- **Size of the generated dataset:** 298.49 GB
- **Total amount of disk used:** 404.53 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Como se librará de la celulitis en el gimnasio La piel superflua en las manos después del adelgazamiento, Los bailes fáciles pa..."
}
```
#### unshuffled_original_et
- **Size of downloaded dataset files:** 1.88 GB
- **Size of the generated dataset:** 5.17 GB
- **Total amount of disk used:** 7.06 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"MTÜ AB Video järgib oma tegevuses kodanikuühenduste eetilise tegevuse üldtunnustatud põhimõtteid, mis on lühidalt kokkuvõetud 7..."
}
```
#### unshuffled_original_eu
- **Size of downloaded dataset files:** 248.19 MB
- **Size of the generated dataset:** 894.83 MB
- **Total amount of disk used:** 1.14 GB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Gure jarduerek eraikuntzarekin, elkarbizitzarekin, hirigintzarekin eta ekologiarekin dute harremana, baita ideia eta konponbideak irudikatu eta garatzearekin ere, eraikuntza sektorea hobetuz, pertsonen erosotasuna eta bizi-kalitatea hobetzeko."
}
```
#### unshuffled_original_fa
- **Size of downloaded dataset files:** 20.96 GB
- **Size of the generated dataset:** 84.21 GB
- **Total amount of disk used:** 105.17 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"قـــــــــــــــــرار بود با هم کنـــــــــــــار بیایم نه اینکه از کنــــــــــــار هم رد بشیم...!!!\\nاگر روزی دلت لبریز غم بو..."
}
```
#### unshuffled_original_fi
- **Size of downloaded dataset files:** 9.97 GB
- **Size of the generated dataset:** 28.57 GB
- **Total amount of disk used:** 38.54 GB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Kiitos Deelle kaikesta - 1,5 viikkoa kulunut, kun Dee ei ole enää ollut omani. Reilu viikko sitten sunnuntaina vein Deen uuteen kotiinsa. Itselläni on ollut niin ristiriitaiset t..."
}
```
#### unshuffled_original_fr
- **Size of downloaded dataset files:** 105.32 GB
- **Size of the generated dataset:** 303.19 GB
- **Total amount of disk used:** 408.51 GB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Média de débat d'idées, de culture et de littérature. Récits, décryptages, analyses, portraits et critiques autour de la vie des idées. Magazine engagé, ouvert aux autres et au monde.. Bring up to date in french"
}
```
#### unshuffled_original_frr
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Hiragana’ Practice’Sheet’1’(A -O)’ ’ Name:’________ __________________________’Section:’_______________ _’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ..."
}
```
#### unshuffled_original_fy
- **Size of downloaded dataset files:** 12.40 MB
- **Size of the generated dataset:** 36.24 MB
- **Total amount of disk used:** 48.64 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Nim in sêfte ride op Holmsjön, yn ien fan 'e lytse marren yn de omkriten, of nim se op avontueren lykas nonresidential. lâns Indalsälven wetter. Holm Sportklubb hawwe kano 's te huur, yn gearwurking mei de Baltyske Power konferinsje."
}
```
#### unshuffled_original_ga
- **Size of downloaded dataset files:** 29.27 MB
- **Size of the generated dataset:** 92.37 MB
- **Total amount of disk used:** 121.63 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Is fóram é seo chun plé a dhéanamh ar an leabhar atá roghnaithe do mhí na Samhna 2013 amháin. Ní féidir ach le baill chláraithe..."
}
```
#### unshuffled_original_gd
- **Size of downloaded dataset files:** 0.52 MB
- **Size of the generated dataset:** 2.02 MB
- **Total amount of disk used:** 2.55 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Zhou Yujun, a 'phàrtaidh Rùnaire Comataidh Sgìre Yanfeng ann Hengyang bhaile agus a Sgìre pàrtaidh agus an riaghaltas a' bhuidheann-riochdachaidh a 'tighinn a chèilidh air ar companaidh air Apr. 14, 2017."
}
```
#### unshuffled_original_gl
- **Size of downloaded dataset files:** 235.38 MB
- **Size of the generated dataset:** 656.48 MB
- **Total amount of disk used:** 891.87 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"O persoal de Inditex da provincia de Pontevedra segue a reclamar iguais condicións laborais no conxunto do país - CIG: Confeder..."
}
```
#### unshuffled_original_gn
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.04 MB
- **Total amount of disk used:** 0.05 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"º ÑÆÚÓ À Ã Ð É Æ ¾ ÄÂ Î À ¼ Æ É ÄÛ = Ü Ý\\\"Þ ßà á â ã ä å æçè ã é ê â å àë ì æê íî é á ë ï í çì àð í Ü à ñ ê é ò ä ì\"..."
}
```
#### unshuffled_original_gom
- **Size of downloaded dataset files:** 0.44 MB
- **Size of the generated dataset:** 2.25 MB
- **Total amount of disk used:** 2.71 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"दुष्ट शीळ हें कौरवांचें । रामें सविस्तर देखूनि साचें । बोलिले वचनें जें दुर्वाचे । करी तयांचें अनुस्मरण ॥२२०॥\"..."
}
```
#### unshuffled_original_gu
- **Size of downloaded dataset files:** 232.02 MB
- **Size of the generated dataset:** 1.09 GB
- **Total amount of disk used:** 1.33 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"અધિક માસ ચાલે છે. સમગ્ર ભારતમાં અને તેમાંય ખાસ કરીને પવિત્ર કે ધાર્મિક કહેવાય છે તેવા સ્થાનક પર કથાનો દોર ચાલે છે. ઉનાળાની કાળઝ..."
}
```
#### unshuffled_original_he
- **Size of downloaded dataset files:** 5.66 GB
- **Size of the generated dataset:** 21.11 GB
- **Total amount of disk used:** 26.77 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"זקוקים לרשתות נגד יתושים? מחפשים רשת מתאימה לחלון צר וקטן? רשתות נגד יתושים אקורדיון של חברת קליר-מש הן הפתרון.\\nרשתות לחלונות ..."
}
```
#### unshuffled_original_hi
- **Size of downloaded dataset files:** 3.66 GB
- **Size of the generated dataset:** 17.93 GB
- **Total amount of disk used:** 21.59 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"'आइटम गर्ल' बनकर हिट हुई थीं राखी सावंत, आज करीना-कटरीना तक फॉलो कर रही हैं ट्रेंड नक्सलियों का दम निकालेगा बाइक ग्रेनेड लॉन्च..."
}
```
#### unshuffled_original_hr
- **Size of downloaded dataset files:** 79.42 MB
- **Size of the generated dataset:** 243.83 MB
- **Total amount of disk used:** 323.24 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"U raspravi je sudjelovao i HSS-ov saborski zastupnik rekavši kako poljoprivrednici ne osjete mjere o kojima ministar govori jer..."
}
```
#### unshuffled_original_hsb
- **Size of downloaded dataset files:** 1.39 MB
- **Size of the generated dataset:** 4.49 MB
- **Total amount of disk used:** 5.87 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Budyšin (SN/BŠe). Elektronikarjo mějachu lětsa cyle hinaši zazběh do swojeho wukubłanja. Wokrjesne rjemjeslnistwo bě mjenujcy w..."
}
```
#### unshuffled_original_ht
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan..."
}
```
#### unshuffled_original_hu
- **Size of downloaded dataset files:** 15.69 GB
- **Size of the generated dataset:** 43.07 GB
- **Total amount of disk used:** 58.77 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"monster - Amatőr, házi szex videók és kezdő csjaok pornó filmjei. - Free amateur, home made sex videos and online porn movies. ..."
}
```
#### unshuffled_original_hy
- **Size of downloaded dataset files:** 897.36 MB
- **Size of the generated dataset:** 3.94 GB
- **Total amount of disk used:** 4.84 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Արցախի Հանրապետության հռչակման 26-րդ տարեդարձի կապակցությամբ Շուշիի Արվեստի կենտրոնում կազմակերպվել է մոսկվաբնակ նկարիչներ՝ հայ..."
}
```
#### unshuffled_original_ia
- **Size of downloaded dataset files:** 0.08 MB
- **Size of the generated dataset:** 0.69 MB
- **Total amount of disk used:** 0.78 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha ha h..."
}
```
#### unshuffled_original_id
- **Size of downloaded dataset files:** 10.60 GB
- **Size of the generated dataset:** 32.32 GB
- **Total amount of disk used:** 42.91 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Perihal dari itu, kalau kunci hal yang demikian hilang, pemilik wajib melapor ke bengkel sah untuk dibuatkan kunci baru dengan ..."
}
```
#### unshuffled_original_ie
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Plastic Yo Yo Metal Yo Yos Wooden Yo Yo Keychain Yo Yo Translucent Yo Yo Light Up Yo Yo Globe Yo Yo Stress Reliever Yo Yo Jellyfish Yo Yo Sports Ball Yo Yo Sound Yo Yo Miniature Yo Yo Promotional Yo Yo Novelty Yo Yo Video Game Yo Yo ECO Recycled Yo Yo"
}
```
#### unshuffled_original_ilo
- **Size of downloaded dataset files:** 0.27 MB
- **Size of the generated dataset:** 0.92 MB
- **Total amount of disk used:** 1.20 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Segun ken ni Ping-ay, ti yellow corn ti maysa kadagiti nadakamat a liberalized agricultural commodity iti daytoy a free trade k..."
}
```
#### unshuffled_original_io
- **Size of downloaded dataset files:** 0.04 MB
- **Size of the generated dataset:** 0.16 MB
- **Total amount of disk used:** 0.20 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Chekia esas parlamentala republiko. La chefo di stato esas la prezidanto. Til 2013 lu elektesis dal parlamento. Pos ta yaro, ol..."
}
```
#### unshuffled_original_is
- **Size of downloaded dataset files:** 533.03 MB
- **Size of the generated dataset:** 1.52 GB
- **Total amount of disk used:** 2.06 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Eyjar.net - upplýsinga- og fréttamiðill um Vestmannaeyjar - Fréttir - Nái núverandi stefna stjórnvalda fram að ganga mun það va..."
}
```
#### unshuffled_original_it
- **Size of downloaded dataset files:** 52.16 GB
- **Size of the generated dataset:** 147.38 GB
- **Total amount of disk used:** 199.54 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Jaundice - causes, treatment & pathology massaggio a osteochondrosis dellindizio di una controindicazione\\nTrattamento su un co..."
}
```
#### unshuffled_original_ja
- **Size of downloaded dataset files:** 79.56 GB
- **Size of the generated dataset:** 232.22 GB
- **Total amount of disk used:** 311.78 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"神社などへ一緒に同行して、様々な角度のショットで家族写真やお子様の写真を撮影致します!お好みに合わせて様々な写真を取ることができますので、その場でカメラマンへのリクエストも可能です!お子様の晴れ姿を、緊張していない自然な笑顔で残しませんか?\\n※七五三の..."
}
```
#### unshuffled_original_jbo
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.77 MB
- **Total amount of disk used:** 0.98 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "ni'o 23 la cimast. cu 23moi djedi fi'o masti la cimast. noi ke'a cu cimoi masti .i 22 la cimast. cu purlamdei .ije 24 la cimast. cu bavlamdei"
}
```
#### unshuffled_original_jv
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 0.69 MB
- **Total amount of disk used:** 0.91 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"José Mourinho (diwaca: [ʒuˈzɛ moˈɾiɲu]; lair ing Setubal, Portugal, 26 Januari 1963; umur 55 taun) iku salah siji pelatih bal k..."
}
```
#### unshuffled_original_ka
- **Size of downloaded dataset files:** 680.74 MB
- **Size of the generated dataset:** 3.77 GB
- **Total amount of disk used:** 4.45 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"წამიყვანე შენთან ერთად (ქართულად) / Возьми меня с собой (картулад) / (რუსული სერიალები ქართულად) (რუსების პორნო ონლაინში) (ruse..."
}
```
#### unshuffled_original_kk
- **Size of downloaded dataset files:** 615.06 MB
- **Size of the generated dataset:** 2.83 GB
- **Total amount of disk used:** 3.45 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Түлкібас ауданында «Латын негізді әліпби мен емле ережесі туралы насихат» жобасының тобы семинар өткізді\\nЕлорданың «Қазақстан»..."
}
```
#### unshuffled_original_km
- **Size of downloaded dataset files:** 193.28 MB
- **Size of the generated dataset:** 1.10 GB
- **Total amount of disk used:** 1.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ខ្សឹបដាក់ត្រចៀក៖ លោក សួស សុផានិត នាយផ្នែករដ្ឋបាលព្រៃឈើ ស្រុកភ្នំក្រវាញ់ ដែលទើបឡើងកាន់តំណែងថ្មី បើកដៃឲ្យឈ្នួញ ប្រព្រឹត្តបទល្មើស ..."
}
```
#### unshuffled_original_kn
- **Size of downloaded dataset files:** 342.15 MB
- **Size of the generated dataset:** 1.76 GB
- **Total amount of disk used:** 2.11 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ರಾಷ್ಟ್ರಪತಿ ಪ್ರಣಬ್ ಮುಖರ್ಜಿಯಿಂದ ಪದ್ಮ ಪ್ರಶಸ್ತಿ ಪ್ರದಾನ | President Pranab Mukherjee Confers Padma Awards | Photo Gallery on Kannada..."
}
```
#### unshuffled_original_ko
- **Size of downloaded dataset files:** 8.81 GB
- **Size of the generated dataset:** 25.29 GB
- **Total amount of disk used:** 34.10 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"CIA 프로젝트에서는 데이터베이스로 들어오는 요청을 중간에 수집(Sniffing)하고 수집한 데이터를 분석(Parsing)하여 그로 인한 결과를 판단하여 알릴 수 있는 시스템(Push Service)이 필요하다. 그리고 연구를 ..."
}
```
#### unshuffled_original_krc
- **Size of downloaded dataset files:** 0.66 MB
- **Size of the generated dataset:** 2.68 MB
- **Total amount of disk used:** 3.34 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Шамханланы, Бийлени къаршысына ябушуп, Батыр уланларыбызны къоллары булан «ортакъ ожакъ» къургъанбыз. Шо иш уллу зараллы иш бол..."
}
```
#### unshuffled_original_ku
- **Size of downloaded dataset files:** 33.38 MB
- **Size of the generated dataset:** 99.06 MB
- **Total amount of disk used:** 132.44 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Me di 114 bernameyên xwe yên berê da perçeyên ji berhemên zanyarî yên kurdzanên mezin bi wergera kurdî da ...\\nMe di 114 bernam..."
}
```
#### unshuffled_original_kv
- **Size of downloaded dataset files:** 0.40 MB
- **Size of the generated dataset:** 2.38 MB
- **Total amount of disk used:** 2.78 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Коми кытшыслӧн ыджытжык тор вӧр увтын куйлӧ, сійӧн и фаунасӧ татӧн аркмӧтӧны вӧрын олісь подаэз. Ассямаӧн лоӧ сія, мый кытшас с..."
}
```
#### unshuffled_original_kw
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.04 MB
- **Total amount of disk used:** 0.05 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼Pray without ceasing🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏🏼🙏..."
}
```
#### unshuffled_original_ky
- **Size of downloaded dataset files:** 152.64 MB
- **Size of the generated dataset:** 630.79 MB
- **Total amount of disk used:** 783.43 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Turmush: Бишкек шаардык кеңешинин кезексиз отурумунда мэрге ишенбөөчүлүк көрсөтүү маселеси каралат, - депутат Т.Сагынов\\nБишкек..."
}
```
#### unshuffled_original_la
- **Size of downloaded dataset files:** 5.46 MB
- **Size of the generated dataset:** 27.80 MB
- **Total amount of disk used:** 33.26 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Hæ sunt generationes Noë: Noë vir justus atque perfectus fuit in generationibus suis; cum Deo ambulavit.\\nEcce ego adducam aqua..."
}
```
#### unshuffled_original_lb
- **Size of downloaded dataset files:** 10.73 MB
- **Size of the generated dataset:** 30.60 MB
- **Total amount of disk used:** 41.32 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Während dem Gaardefestival \\\"Ambiance Jardins\\\" vum 15. bis de 17. Mee huet den SNJ nees zesumme mam Groupe Animateur en Inform..."
}
```
#### unshuffled_original_lez
- **Size of downloaded dataset files:** 0.83 MB
- **Size of the generated dataset:** 3.38 MB
- **Total amount of disk used:** 4.20 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Ахцегь хуьр, виридалай ч1ехи лезги хуьрерикая я. Ам Урусатдин виридалай къиблепатавай хуьрерикай я. Ин хуьр...\"..."
}
```
#### unshuffled_original_li
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.03 MB
- **Total amount of disk used:** 0.04 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"'t Good Goedenraad aan de Ezerbaek besjteit oet 'n kesjtièl mèt gesjlote haof en 'n park van 26 hectare. Hie in sjtoon väól beu..."
}
```
#### unshuffled_original_lmo
- **Size of downloaded dataset files:** 0.10 MB
- **Size of the generated dataset:** 0.47 MB
- **Total amount of disk used:** 0.58 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Serét (en tortonés: Sregh; en piemontés: Srèj) l'è 'n cümü italià, de la regiù del Piemónt, en Pruvìncia de Alessandria. El g'h..."
}
```
#### unshuffled_original_lo
- **Size of downloaded dataset files:** 33.92 MB
- **Size of the generated dataset:** 182.36 MB
- **Total amount of disk used:** 216.28 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"ຜູ້ພິພາກສາ ປະຈຳເຂດ ສຫລ ທ່ານນຶ່ງ ຕັດສິນວ່າ ໂຄງການເກັບກຳຂໍ້ມູນ ທາງໂທລະສັບ ຂອງອົງການ ຄວາມໝັ້ນຄົງແຫ່ງຊາດ ແມ່ນຖືກຕ້ອງ ຕາມກົດໝາຍ.\\nກະ..."
}
```
#### unshuffled_original_lrc
- **Size of downloaded dataset files:** 0.02 MB
- **Size of the generated dataset:** 0.07 MB
- **Total amount of disk used:** 0.09 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"آرلینگتون یئ گئل د شأریا ڤولاتچە ڤیرجینیا و یئ گئل د شأریا ڤولات ڤولاتچە یا یأکاگئرئتە ئمریکاە. ئی شأر دویومی کألوٙن شأر د راسا..."
}
```
#### unshuffled_original_lt
- **Size of downloaded dataset files:** 3.44 GB
- **Size of the generated dataset:** 9.45 GB
- **Total amount of disk used:** 12.89 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Čir vir vir pavasaris! Čia čia čia… dalinamės labai simpatiška video pamokėle, kurią pristato ab888art galerija.\\nBe galo papra..."
}
```
#### unshuffled_original_lv
- **Size of downloaded dataset files:** 1.49 GB
- **Size of the generated dataset:** 4.27 GB
- **Total amount of disk used:** 5.75 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Dekoratīvi sliekšņi MITSUBISHI OUTLANDER 2007, izgatavoti no ovālas formas, pulētas nerūsējošā tērauda caurules...\\ndažādas tūn..."
}
```
#### unshuffled_original_mai
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.33 MB
- **Total amount of disk used:** 0.34 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"१ · २ · ३ · ४ · ५ · ६ · ७ · ८ · ९ · १० · ११ · १२ · १३ · १४ · १५ · १६ · १७ · १८ · १९ · २० · २१ · २२ · २३ · २४ · २५ · २६ · २७ · २..."
}
```
#### unshuffled_original_mg
- **Size of downloaded dataset files:** 6.22 MB
- **Size of the generated dataset:** 21.79 MB
- **Total amount of disk used:** 28.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Nanamboatra taratasy apetaka sy soso-kevitra ho an'ny olona te-hanatevin-daharana ity fihetsiketsehana ity i Anocrena.\\nNosorat..."
}
```
#### unshuffled_original_mhr
- **Size of downloaded dataset files:** 1.84 MB
- **Size of the generated dataset:** 7.55 MB
- **Total amount of disk used:** 9.38 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Акрет жап годым Уганда кундемым Пигмей племена- влак айлен шогеныт. мемнан эран 1 курым гыч Банту племена влакат тиде кундемышк..."
}
```
#### unshuffled_original_min
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.63 MB
- **Total amount of disk used:** 0.64 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\" ..."
}
```
#### unshuffled_original_mk
- **Size of downloaded dataset files:** 508.24 MB
- **Size of the generated dataset:** 2.20 GB
- **Total amount of disk used:** 2.71 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"„Филм плус“ е насловен првиот филмски месечник во Македонија, чиј прв број ќе биде промовиран вечер во „Менада“. Новото македон..."
}
```
#### unshuffled_original_ml
- **Size of downloaded dataset files:** 938.69 MB
- **Size of the generated dataset:** 5.24 GB
- **Total amount of disk used:** 6.18 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"സ്ത്രീ പ്രവേശനം സര്ക്കാര് പൂര്ണമായും അംഗീകരിക്കുന്നുവെന്നും ശബരിമലയുടെ സുരക്ഷയില് ഇടപെടുമെന്നും സര്ക്കാര് ഹൈക്കോടതിയില്\\..."
}
```
#### unshuffled_original_mn
- **Size of downloaded dataset files:** 472.36 MB
- **Size of the generated dataset:** 2.33 GB
- **Total amount of disk used:** 2.81 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Монгол улс, Улаанбаатар хот - 14191 Энхтайваны өргөн чөлөө - 10, Багш хөгжлийн ордон, Багшийн мэргэжил дээшлүүлэх институт\\nБаг..."
}
```
#### unshuffled_original_mr
- **Size of downloaded dataset files:** 525.31 MB
- **Size of the generated dataset:** 2.82 GB
- **Total amount of disk used:** 3.34 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Home / motivational marathi story / उद्योजकता (Entrepreneurship) / यांना हे जमलय, तर आपल्याला का नाही जमणार ?\\nयापैकी कोणाचीही ..."
}
```
#### unshuffled_original_mrj
- **Size of downloaded dataset files:** 0.30 MB
- **Size of the generated dataset:** 1.16 MB
- **Total amount of disk used:** 1.47 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Лӹпӹвлӓ (латинлӓ Lepidoptera ; алыкмарла лыве-влак) — капшангывлӓ йыхыш пырышы сӱмӓн нӹл шылдыран капшангывлӓ. Цилӓжӹ 180000 тӹ..."
}
```
#### unshuffled_original_ms
- **Size of downloaded dataset files:** 28.46 MB
- **Size of the generated dataset:** 122.33 MB
- **Total amount of disk used:** 150.79 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Sanad pertama daripada Zuhair bin Harb daripada ‘Affan daripada Hammad daripada Thabit daripada Anas.\\nSanad kedua daripada ‘Ab..."
}
```
#### unshuffled_original_mt
- **Size of downloaded dataset files:** 7.53 MB
- **Size of the generated dataset:** 24.47 MB
- **Total amount of disk used:** 32.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "tibgħat il-kawża lura lill-Qorti Ġenerali għall-annullament jew għat-tnaqqis tal-penalità imposta mill-Kummissjoni bid-deċiżjoni inizjali kif emendata bid-deċiżjoni ta’ rettifika;"
}
```
#### unshuffled_original_mwl
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Deciplina social i outónoma que angloba atebidades de ouserbaçon, de análeze, de çcriçon, cumparaçon, de sistematizaçon i de sp..."
}
```
#### unshuffled_original_my
- **Size of downloaded dataset files:** 369.85 MB
- **Size of the generated dataset:** 2.02 GB
- **Total amount of disk used:** 2.39 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ျမ၀တီ - ရန္ကုန္တိုင္းေဒသႀကီး ေျမာက္ဥကၠလာပႏွင္႕ ဗဟန္းၿမိဳ႔နယ္ မေကြးတိုင္း ေဒသႀကီး ပခုကၠဴၿမိဳ႔နယ္တို႔၌ ျမန္မာ႕တပ္မေတာ္အား ေထာက္ခံ..."
}
```
#### unshuffled_original_myv
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"2018 иень умарьковонь 6-це чистэ сась паро куля! Россиянь культурань Министерствась макссь невтемань конёв (прокатной удостовер..."
}
```
#### unshuffled_original_mzn
- **Size of downloaded dataset files:** 0.18 MB
- **Size of the generated dataset:** 0.72 MB
- **Total amount of disk used:** 0.90 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"قرآن یا قوران اسلام ِآسمونی کتاب هسته. مسلمونون گانّّه قرآن ره خدا، وحی جه برسنییه، «محمد معجزه» هسته و ثقلین حدیث دله ونه خَو..."
}
```
#### unshuffled_original_nah
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "In mācuīlpōhualxihuitl VI (inic chicuacē) in mācuīlpōhualli xiuhitl cāhuitl īhuīcpa 501 xihuitl oc 600 xihuitl."
}
```
#### unshuffled_original_nap
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.02 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ò AUDIT í Ç è î ÿ å å 30 ò ÿ ÿ é, õ ñ ì ÿ, ê ã- ò à ì. å â å í ç â à à é ñ è å é ó ó ë. å å å û è å î é è à. à è à AUDIT 1-7 â ..."
}
```
#### unshuffled_original_nds
- **Size of downloaded dataset files:** 6.74 MB
- **Size of the generated dataset:** 18.23 MB
- **Total amount of disk used:** 24.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Dor kann sik vun nu af an de hele plattdüütsche Welt – vun Niebüll bit New York, vun Helgoland bit Honolulu – drapen. Allens, w..."
}
```
#### unshuffled_original_ne
- **Size of downloaded dataset files:** 355.29 MB
- **Size of the generated dataset:** 1.87 GB
- **Total amount of disk used:** 2.22 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"बर्दिबास नगरपालिकाको तेस्रो नगर परिषदबाट पारित आ.व.२०७३।७४ को संशोधित र २०७४।७५ को प्रस्तावित नीति, कार्यक्रम तथा बजेट\\nअार्थिक..."
}
```
#### unshuffled_original_new
- **Size of downloaded dataset files:** 1.03 MB
- **Size of the generated dataset:** 5.77 MB
- **Total amount of disk used:** 6.79 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"थ्व शहरयागु अक्षांश ३४.७००१६४ उत्तर व देशान्तर ८६.३७६४६९ पश्चिम खः (34.700164° N 86.376469° W)। थ्व थासे ७२२६७३२ वर्ग मिटर (२.७..."
}
```
#### unshuffled_original_nl
- **Size of downloaded dataset files:** 29.35 GB
- **Size of the generated dataset:** 83.23 GB
- **Total amount of disk used:** 112.58 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Op vrijdag 31 augustus wordt het nieuwe studiejaar van de masteropleiding architectuur geopend met een dagexcursie naar Venlo.\\..."
}
```
#### unshuffled_original_nn
- **Size of downloaded dataset files:** 32.86 MB
- **Size of the generated dataset:** 90.84 MB
- **Total amount of disk used:** 123.70 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "Planomtale krav til innhald Bakgrunn: Spørsmål frå fleire kommunar om kva ein planomtale/planbeskrivelse bør innehalde Fylkeskommunen og fylkesmannen har i ein del saker reist motsegn på formelt grunnlag"
}
```
#### unshuffled_original_no
- **Size of downloaded dataset files:** 3.11 GB
- **Size of the generated dataset:** 8.65 GB
- **Total amount of disk used:** 11.76 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Ytterligere aktører i primærhelsetjenesten og andre NHS-virksomheter ble infisert, inkludert legekontor.Læreren vår er så attra..."
}
```
#### unshuffled_original_oc
- **Size of downloaded dataset files:** 1.57 MB
- **Size of the generated dataset:** 6.12 MB
- **Total amount of disk used:** 7.71 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": ".рф (rf, còdi punycode: .xn--p1ai)[1] es lo nom de domeni en rus per Russia. Foguèt activat lo 12 de mai de 2010. Lo còdi latin es .ru."
}
```
#### unshuffled_original_or
- **Size of downloaded dataset files:** 49.84 MB
- **Size of the generated dataset:** 260.15 MB
- **Total amount of disk used:** 309.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ଭୁବନେଶ୍ୱର, ୨୭/୧– (ଓଡ଼ିଆ ପୁଅ) ସିପିଆଇ ଜାତୀୟ ପରିଷଦର ଆହ୍ୱାନକ୍ରମେ ଗତକାଲି ଜାନୁୟାରୀ ୨୬ ସାଧାରଣତନ୍ତ୍ର ଦିବସକୁ ଦେଶ ବ୍ୟାପୀ ସମ୍ବିଧାନ ସୁରକ୍ଷା ..."
}
```
#### unshuffled_original_os
- **Size of downloaded dataset files:** 3.09 MB
- **Size of the generated dataset:** 12.90 MB
- **Total amount of disk used:** 15.99 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"1. Лæппу æмæ чызг казрæдзийы зæрдæмæ куы фæцæуынц æмæ, куы сфæнд кæнынц сæ цард баиу кæнын, уæд лæппу бар ракуры чызгæй, цæмæй ..."
}
```
#### unshuffled_original_pa
- **Size of downloaded dataset files:** 164.21 MB
- **Size of the generated dataset:** 801.16 MB
- **Total amount of disk used:** 965.37 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ਰਜਿ: ਨੰ: PB/JL-138/2018-20 ਜਿਲਦ 63, ਬਾਨੀ ਸੰਪਾਦਕ (ਸਵ:) ਡਾ: ਸਾਧੂ ਸਿੰਘ ਹਮਦਰਦ ਫ਼ੋਨ : 0181-2455961-62-63, 5032400, ਫੈਕਸ : 2455960, 2..."
}
```
#### unshuffled_original_pam
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Áku pu i Anak ning Aláya at ngeni ipákit kó kékayu ngan nûng makanánu lang susúlat détinang kulit a mágkas. Lauan ya ing tarátu..."
}
```
#### unshuffled_original_pl
- **Size of downloaded dataset files:** 42.88 GB
- **Size of the generated dataset:** 117.12 GB
- **Total amount of disk used:** 160.01 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"System informatyczny - Załącznik nr 1 do zarządzenia Wójta Gminy Podegrodzie Nr 530/2013 z dnia 27 maja 2013 r\\nSystem informat..."
}
```
#### unshuffled_original_pms
- **Size of downloaded dataset files:** 0.75 MB
- **Size of the generated dataset:** 2.15 MB
- **Total amount of disk used:** 2.92 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Louvigné-du-Désert a l'é na comun-a fransèisa ant la region aministrativa dla Brëtagna, ant ël dipartiment d'Ille-et-Vilaine. A..."
}
```
#### unshuffled_original_pnb
- **Size of downloaded dataset files:** 3.22 MB
- **Size of the generated dataset:** 12.04 MB
- **Total amount of disk used:** 15.26 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"ایہ فائل Wikimedia Commons توں اے تے دوجیاں ویونتاں تے وی ورتی جاےکدی اے۔ گل بات اس دے فائل گل بات صفہ تے تھلے دتی گئی۔\"..."
}
```
#### unshuffled_original_ps
- **Size of downloaded dataset files:** 103.66 MB
- **Size of the generated dataset:** 379.51 MB
- **Total amount of disk used:** 483.17 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Many people usually use the time period ‘business to business (B2B) advertising,’ however most of them do not know precisely wh..."
}
```
#### unshuffled_original_pt
- **Size of downloaded dataset files:** 47.26 GB
- **Size of the generated dataset:** 132.64 GB
- **Total amount of disk used:** 179.89 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Você pode estar lendo este texto no sofá, levantar pra pegar uma breja na geladeira, dar uma cagada e sentar novamente, sem int..."
}
```
#### unshuffled_original_qu
- **Size of downloaded dataset files:** 0.02 MB
- **Size of the generated dataset:** 0.08 MB
- **Total amount of disk used:** 0.10 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Warayu wichay (kastilla simipi: Ascensión de Guarayos) nisqaqa Buliwya mama llaqtapi, Santa Krus suyupi, huk llaqtam, Warayu pruwinsyap uma llaqtanmi."
}
```
#### unshuffled_original_rm
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"practicists agrars / practicistas agraras AFP pon far ina furmaziun da basa scursanida per cuntanscher in attestat federal da q..."
}
```
#### unshuffled_original_ro
- **Size of downloaded dataset files:** 9.53 GB
- **Size of the generated dataset:** 26.87 GB
- **Total amount of disk used:** 36.40 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"“În viață, oportunitatea nu este totul. Cine atrage Lumina, cineva bun în umbră. Timpul ne creează.” maestru\\nLyn.Evans: Ce mar..."
}
```
#### unshuffled_original_ru
- **Size of downloaded dataset files:** 319.76 GB
- **Size of the generated dataset:** 1241.63 GB
- **Total amount of disk used:** 1561.38 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Доступ к данному профилю для публичного просмотра закрыт администрацией сайта - профиль находится на модерации.\\nРазработчикам ..."
}
```
#### unshuffled_original_sa
- **Size of downloaded dataset files:** 17.52 MB
- **Size of the generated dataset:** 97.06 MB
- **Total amount of disk used:** 114.58 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"अनिरुद्धनगरे क्रीडिता रामलीला सम्प्रति समाप्ता अस्ति । तस्य कानिचन् चित्राणि पूर्वमेव प्रकाशितानि सन्ति । द्वौ चलचित्रौ अपि ..."
}
```
#### unshuffled_original_sah
- **Size of downloaded dataset files:** 9.08 MB
- **Size of the generated dataset:** 43.82 MB
- **Total amount of disk used:** 52.90 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████..."
}
```
#### unshuffled_original_scn
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
{
"id": 0,
"text": "La gilusìa è nu sintimentu dulurusu ca nasci d'un disideriu di pussessu sclusivu ntê cunfrunti dâ pirsuna amata e dû timuri, dû suspettu o dâ cirtizza dâ sò nfidiltati."
}
```
#### unshuffled_original_sd
- **Size of downloaded dataset files:** 90.62 MB
- **Size of the generated dataset:** 364.25 MB
- **Total amount of disk used:** 454.88 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"هر ڪو ڄاڻي ٿو ته جڏهن توهان هڪ وڏي خريد ڪرڻ چاهيون ٿا, توهان پڄي ضروري حڪم ۾ ان جي ڪم ڪرڻ جي هٿ ۾ لاڳاپو ڪيو آهي. جي شيء آهي ته..."
}
```
#### unshuffled_original_sh
- **Size of downloaded dataset files:** 3.46 MB
- **Size of the generated dataset:** 25.84 MB
- **Total amount of disk used:** 29.30 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Opština Gornja Radgona se nalazi u sjeveroistočnoj Sloveniji i graniči s susjednom Austriji duž rijeke Mure. Sa tridesetim nase..."
}
```
#### unshuffled_original_si
- **Size of downloaded dataset files:** 310.93 MB
- **Size of the generated dataset:** 1.47 GB
- **Total amount of disk used:** 1.78 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"ලාංකීය සිතිවිලි සිංහල බ්ලොග් කියවනය කොත්තු සින්ඩිය ලංකා Blogger හත්මාළුව ලංකා බ්ලොග් කියවනය මාතලන්ගේ සින්ඩිය මොබයිල්lk\\nඅවකාශය ..."
}
```
#### unshuffled_original_sk
- **Size of downloaded dataset files:** 3.71 GB
- **Size of the generated dataset:** 9.81 GB
- **Total amount of disk used:** 13.52 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Aktivity | Agentúra podporovaného zamestnávania | vzdelávanie pre klientov, vzdelávanie pre odborníkov, kurzy\\nŠpecializované k..."
}
```
#### unshuffled_original_sl
- **Size of downloaded dataset files:** 956.20 MB
- **Size of the generated dataset:** 2.68 GB
- **Total amount of disk used:** 3.63 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Če Creatures, ki je želel, da pridejo na čas, predvsem je povedlo – razlikuje od ljubosumja začel grizenja kolen (ali zadnjica)..."
}
```
#### unshuffled_original_so
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.06 MB
- **Total amount of disk used:** 0.06 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"тттттттттттттттттттттттттттттттт тттттттттттттттттттттттттттттттт тттттттттттттттттттттттттттттттт ттттттттттттттттуууууууууууу..."
}
```
#### unshuffled_original_sq
- **Size of downloaded dataset files:** 861.84 MB
- **Size of the generated dataset:** 2.44 GB
- **Total amount of disk used:** 3.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Çfarë do të më pëlqente tek një femër ose çfarë do të më shndërronte në një shpërthim drite? – Albert Vataj\\nTë gjithëve një zo..."
}
```
#### unshuffled_original_sr
- **Size of downloaded dataset files:** 1.08 GB
- **Size of the generated dataset:** 4.13 GB
- **Total amount of disk used:** 5.21 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Корисни савети за сваки дан. На сајту су разне категорије, као што су љепота, мода, кување и поправка властитим рукама.\\nШколск..."
}
```
#### unshuffled_original_su
- **Size of downloaded dataset files:** 0.06 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.28 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Kartu krédit nyaéta \"duit plastik\" anu dikaluarkeun ku bank pikeun alat pambayaran di tempat-tempat nu tangtu samisal jiga di hotél, réstoran, tempat rékréasi jeung sajabana.[1]"
}
```
#### unshuffled_original_sv
- **Size of downloaded dataset files:** 17.18 GB
- **Size of the generated dataset:** 47.00 GB
- **Total amount of disk used:** 64.18 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"1783 är ett viktigt årtal i den nya tidens historia. Det året slöts en fred i Paris och därmed blev de 13 brittiska kolonierna ..."
}
```
#### unshuffled_original_sw
- **Size of downloaded dataset files:** 3.71 MB
- **Size of the generated dataset:** 14.07 MB
- **Total amount of disk used:** 17.78 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Miripuko hiyo inakuja mwanzoni mwa Wiki Takatifu kuelekea Pasaka na ikiwa ni wiki chache tu kabla ya Papa Francis kuanza ziara yake katika nchi hiyo yenye idadi kubwa kabisa ya watu katika ulimwengu wa nchi za Kiarabu."
}
```
#### unshuffled_original_ta
- **Size of downloaded dataset files:** 1.74 GB
- **Size of the generated dataset:** 9.93 GB
- **Total amount of disk used:** 11.67 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"பொழுது சாய்ந்து வெகு நேரமாகிவிட்டது. கூலி வேலைக்குப் போயிருந்த 'சித்தாள் ' பெண்கள் எல்லோரும் வீடு திரும்பி விட்டார்கள். இன்னும்..."
}
```
#### unshuffled_original_te
- **Size of downloaded dataset files:** 522.47 MB
- **Size of the generated dataset:** 2.61 GB
- **Total amount of disk used:** 3.13 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"హర్యానాలో టోల్ దగ్గర సిబ్బంది.. స్థానిక ప్రజలు కొట్టుకున్నారు. కర్నాల్ అనే గ్రామానికి సమీపంలో టోల్ గేట్ ఉంది. అయితే సాధారణంగా స..."
}
```
#### unshuffled_original_tg
- **Size of downloaded dataset files:** 90.97 MB
- **Size of the generated dataset:** 397.43 MB
- **Total amount of disk used:** 488.41 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Ҳумайро гуфтааст, мухолифи низом аст, низоме, ки дар Тоҷикистон вуҷуд дорад. Ба ин маънӣ, худро мухолифи давлату ҳукумати Тоҷик..."
}
```
#### unshuffled_original_th
- **Size of downloaded dataset files:** 7.38 GB
- **Size of the generated dataset:** 38.29 GB
- **Total amount of disk used:** 45.67 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ฟันที่แลดูขาวสะอาดไม่มีเศษอาหารติดอยู่ เหงือกสีชมพู ไม่เจ็บ หรือมีเลือดออกเวลาแปรงฟันหรือขัดฟัน ไม่มีปัญหาเรื่องกลิ่นปาก ทำให้ก..."
}
```
#### unshuffled_original_tk
- **Size of downloaded dataset files:** 2.96 MB
- **Size of the generated dataset:** 10.66 MB
- **Total amount of disk used:** 13.62 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"Türkmenistanyň Prezidenti agyr atletika boýunça dünýä çempionatyna taýýarlyk işleriniň barşy bilen tanyşdy\\nHalallykdan kemal t..."
}
```
#### unshuffled_original_tl
- **Size of downloaded dataset files:** 204.89 MB
- **Size of the generated dataset:** 606.30 MB
- **Total amount of disk used:** 811.19 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"“Gusto ko manawagan sa mga Unit Head ng Chanel 2 Salve. Kasi napapansin ko iyon mga alaga ko ang taping halos once a week lang,..."
}
```
#### unshuffled_original_tr
- **Size of downloaded dataset files:** 21.96 GB
- **Size of the generated dataset:** 63.58 GB
- **Total amount of disk used:** 85.54 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Son yıllarda görülen ay tutulmalarına göre daha etkili olacağı söylenen Kanlı veya Kırmızı Ay Tutulmasına saatler kaldı. Bu akş..."
}
```
#### unshuffled_original_tt
- **Size of downloaded dataset files:** 151.06 MB
- **Size of the generated dataset:** 703.42 MB
- **Total amount of disk used:** 854.47 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"\\\"Иремнең вафатына 40 көн узгач, Алмаз да безнең өйгә кереп үлде\\\". Арчада 35 яшьлек ир өстенә кондызлар ега башлаган агач төшк..."
}
```
#### unshuffled_original_tyv
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.01 MB
- **Total amount of disk used:** 0.01 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Экии, хүндүлуг аалчылар болгаш тыва дылдың деткикчилери! Тыва дылдың болгаш чогаалдың ховар бир башкызынга, Менги Ооржакка, ажы..."
}
```
#### unshuffled_original_ug
- **Size of downloaded dataset files:** 27.92 MB
- **Size of the generated dataset:** 127.42 MB
- **Total amount of disk used:** 155.35 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"زاڭ-ءتۇزىم | عىلىم-تەحنيكا | ءتىل-ادەبيەت | تۇرمىس | دەنە تاربيە | ساياحات-ورتا | سۋرەتتى حابار | سىر سۇحبات | ارناۋلى تاقىرىپ ..."
}
```
#### unshuffled_original_uk
- **Size of downloaded dataset files:** 14.42 GB
- **Size of the generated dataset:** 56.44 GB
- **Total amount of disk used:** 70.86 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Про надання роз'яснення (щодо форми письмового зобов'язання громадян про зворотне ввезення/вивезення товарів), Державна митна с..."
}
```
#### unshuffled_original_ur
- **Size of downloaded dataset files:** 712.61 MB
- **Size of the generated dataset:** 2.80 GB
- **Total amount of disk used:** 3.51 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"آئیے اہم اسلامی کتب کو یونیکوڈ میں انٹرنیٹ پر پیش کرنے کے لئے مل جل کر آن لائن ٹائپنگ کریں۔ محدث ٹائپنگ پراجیکٹ کے ذریعے آپ روز..."
}
```
#### unshuffled_original_uz
- **Size of downloaded dataset files:** 5.78 MB
- **Size of the generated dataset:** 21.46 MB
- **Total amount of disk used:** 27.24 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Qurama tog'lari tizmasining Toshkentdan 154 km uzoqlikdagi Toshkent-Ush yo'li yeqasidaxushmanzara tabiat qo'ynida joylashgan maydoni 30 ga.\nBolalarni sog'lomlashtirish oromgohi Bo'stonliq tumani Oqtosh muntaqasining soy-salqin gushasida joylashgan."
}
```
#### unshuffled_original_vec
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.02 MB
- **Total amount of disk used:** 0.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Par ogni pónto, ła derivada ła xe ła pendensa de ła reta tangente a ła curva de ła funsion f. Ła reta de cołor róso l'è senpre ..."
}
```
#### unshuffled_original_vi
- **Size of downloaded dataset files:** 21.50 GB
- **Size of the generated dataset:** 72.23 GB
- **Total amount of disk used:** 93.73 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Canh chua cá bông lau không chỉ là món ăn giải nhiệt, thanh mát ngày hè mà còn là món siêu bổ dưỡng, rất tốt cho người gầy ốm. ..."
}
```
#### unshuffled_original_vo
- **Size of downloaded dataset files:** 0.30 MB
- **Size of the generated dataset:** 2.12 MB
- **Total amount of disk used:** 2.42 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Sarniguet binon zif in ziläk: Hautes-Pyrénées, in topäd: Midi-Pyrénées, in Fransän. Sarniguet topon videtü 43°19’ 7’’ N e lunetü 0°5’ 19’’ L."
}
```
#### unshuffled_original_wa
- **Size of downloaded dataset files:** 0.09 MB
- **Size of the generated dataset:** 0.29 MB
- **Total amount of disk used:** 0.38 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "Cisse pådje ci n' est co k' on djermon, dj' ô bén k' el pådje est djusse sibåtcheye, eyet co trop tene; et s' divreut ele ecråxhî ene miete."
}
```
#### unshuffled_original_war
- **Size of downloaded dataset files:** 0.64 MB
- **Size of the generated dataset:** 2.68 MB
- **Total amount of disk used:** 3.32 MB
An example of 'train' looks as follows.
```
{
"id": 1,
"text": "An Honce amo in usa ka baryo ngan munisipalidad ha distrito han Rožňava ha rehiyon han Košice ha nasod han Slovakia.\nAn Rumegies amo in usa ka komyun ha departamento han Nord ngan ha rehiyon han Nord-Pas-de-Calais ha nasod han Fransya."
}
```
#### unshuffled_original_wuu
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.13 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"伊春元旦天气 伊春腊八天气 伊春春节天气 伊春情人节天气 伊春元宵节天气 伊春愚人节天气 伊春清明节天气 伊春劳动节天气 伊春母亲节天气 伊春端午节天气 伊春七夕节天气 伊春教师节天气 伊春中秋节天气 伊春国庆节天气 伊春重阳节天气 伊春万圣节天气 伊春..."
}
```
#### unshuffled_original_xal
- **Size of downloaded dataset files:** 0.03 MB
- **Size of the generated dataset:** 0.12 MB
- **Total amount of disk used:** 0.15 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Арнгудин Орн гисн Европд бәәдг һазр. 2007 җилин тooһaр эн орн нутгт 3,600,523 әмтн бәәдг билә. Арнгудин Орнин хотл балһсна нерн..."
}
```
#### unshuffled_original_xmf
- **Size of downloaded dataset files:** 1.05 MB
- **Size of the generated dataset:** 6.12 MB
- **Total amount of disk used:** 7.17 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"მოჩამილი ტექსტი წჷმორინელი რე Creative Commons Attribution-ShareAlike ლიცენზიათ; შილებე გეძინელი პირობეფიშ არსებუა. კილიშკილიშა..."
}
```
#### unshuffled_original_yi
- **Size of downloaded dataset files:** 33.33 MB
- **Size of the generated dataset:** 147.60 MB
- **Total amount of disk used:** 180.94 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"ממשותדיק - חבֿרה, איך אַרבעט איצט אױף אַ זשורנאַל. טאָמער איר האָט עפּעס צוצוגעבן זאָלט איר שיקן מיר אַן אָנזאָג. ס'װעט הײסן \\\"..."
}
```
#### unshuffled_original_yo
- **Size of downloaded dataset files:** 0.01 MB
- **Size of the generated dataset:** 0.06 MB
- **Total amount of disk used:** 0.06 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 0,
"text": "\"Copyright © 2018 BBC. BBC kò mọ̀ nípa àwọn ohun tí ó wà ní àwọn ojú òpó tí ó wà ní ìta. Ọwọ́ tí a fi mú ìbáṣepọ̀ ti ìta.\"..."
}
```
#### unshuffled_original_yue
- **Size of downloaded dataset files:** 0.00 MB
- **Size of the generated dataset:** 0.00 MB
- **Total amount of disk used:** 0.00 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 我 灌 我 灌 我 灌 灌 灌 你還不爆 我累了 投降輸一半可以嗎\"..."
}
```
#### unshuffled_original_zh
- **Size of downloaded dataset files:** 206.00 GB
- **Size of the generated dataset:** 545.61 GB
- **Total amount of disk used:** 751.61 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": 1,
"text": "\"中国铝灰网 中国有色金属矿产网 中国黄莲网 中国水轮发电机网 中国抽油泵网 中国数控雕刻机网 中国不锈钢抛光网 中国磨具加工网 中国压铸铝网 中国耐水腻子网 中国手机摄像头网 中国粗粮网 中国车门锁网 中国钛粉网 中国轮圈网\\n天天中奖彩票图 天天中彩票..."
}
```
</details>
### Data Fields
The data fields are the same among all configs.
- `id`: an `int64` feature.
- `text`: a `string` feature.
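As a quick sanity check, any of the configurations listed above can be loaded and inspected with the `datasets` library. A minimal sketch, assuming one of the smaller configurations (`unshuffled_deduplicated_yo`) and streaming mode to avoid a full download:
```python
from datasets import load_dataset

# Stream one of the smaller configurations so nothing large is downloaded.
dataset = load_dataset("oscar", "unshuffled_deduplicated_yo", split="train", streaming=True)
example = next(iter(dataset))
print(example["id"], example["text"][:80])
```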
### Data Splits
<details>
<summary>Click to expand the number of samples per configuration</summary>
| Language | Language code | Name original | Train original | Words original | Size original | Name deduplicated | Train deduplicated | Words deduplicated | Size deduplicated |
| ----------------- | ------------- | ----------------------- | -------------- | --------------- | ------------- | --------------------------- | ------------------ | ------------------ | ----------------- |
| Afrikaans | af | unshuffled_original_af | 201117 | 43,482,801 | 241M | unshuffled_deduplicated_af | 130640 | 29,533,437 | 163M |
| Albanian | sq | unshuffled_original_sq | 672077 | 374,196,110 | 2.3G | unshuffled_deduplicated_sq | 461598 | 186,856,699 | 1.2G |
| Alemannic | als | unshuffled_original_als | 7324 | 841,750 | 5.0M | unshuffled_deduplicated_als | 4518 | 459,001 | 2.8M |
| Amharic | am | unshuffled_original_am | 83663 | 28,301,601 | 360M | unshuffled_deduplicated_am | 43102 | 16,086,628 | 206M |
| Arabic | ar | unshuffled_original_ar | 16365602 | 8,117,162,828 | 82G | unshuffled_deduplicated_ar | 9006977 | 3,171,221,354 | 32G |
| Aragonese | an | unshuffled_original_an | 2449 | 52,896 | 1.3M | unshuffled_deduplicated_an | 2025 | 45,669 | 801K |
| Armenian | hy | unshuffled_original_hy | 659430 | 273,919,388 | 3.7G | unshuffled_deduplicated_hy | 396093 | 110,196,043 | 1.5G |
| Assamese | as | unshuffled_original_as | 14985 | 6,956,663 | 113M | unshuffled_deduplicated_as | 9212 | 4,366,570 | 71M |
| Asturian | ast | unshuffled_original_ast | 6999 | 381,005 | 2.4M | unshuffled_deduplicated_ast | 5343 | 325,237 | 2.0M |
| Avaric | av | unshuffled_original_av | 456 | 24,720 | 409K | unshuffled_deduplicated_av | 360 | 19,478 | 324K |
| Azerbaijani | az | unshuffled_original_az | 912330 | 322,641,710 | 2.8G | unshuffled_deduplicated_az | 626796 | 167,742,296 | 1.5G |
| Bashkir | ba | unshuffled_original_ba | 42551 | 9,796,764 | 128M | unshuffled_deduplicated_ba | 27050 | 6,922,589 | 90M |
| Basque | eu | unshuffled_original_eu | 506883 | 120,456,652 | 848M | unshuffled_deduplicated_eu | 256513 | 45,359,710 | 342M |
| Bavarian | bar | unshuffled_original_bar | 4 | 399 | 503 | unshuffled_deduplicated_bar | 4 | 399 | 503 |
| Belarusian | be | unshuffled_original_be | 586031 | 144,579,630 | 1.8G | unshuffled_deduplicated_be | 307405 | 83,499,037 | 1.1G |
| Bengali | bn | unshuffled_original_bn | 1675515 | 623,575,733 | 11G | unshuffled_deduplicated_bn | 1114481 | 363,766,143 | 5.8G |
| Bihari | bh | unshuffled_original_bh | 336 | 8,848 | 110K | unshuffled_deduplicated_bh | 82 | 2,875 | 34K |
| Bishnupriya | bpy | unshuffled_original_bpy | 6046 | 198,286 | 4.1M | unshuffled_deduplicated_bpy | 1770 | 96,940 | 1.7M |
| Bosnian | bs | unshuffled_original_bs | 2143 | 106,448 | 447K | unshuffled_deduplicated_bs | 702 | 20,485 | 116K |
| Breton | br | unshuffled_original_br | 37085 | 5,013,241 | 29M | unshuffled_deduplicated_br | 14724 | 2,890,384 | 16M |
| Bulgarian | bg | unshuffled_original_bg | 5869686 | 2,947,648,106 | 32G | unshuffled_deduplicated_bg | 3398679 | 1,268,114,977 | 14G |
| Burmese | my | unshuffled_original_my | 232329 | 56,111,184 | 1.9G | unshuffled_deduplicated_my | 136639 | 30,102,173 | 1.1G |
| Catalan | ca | unshuffled_original_ca | 4390754 | 1,360,212,450 | 8.0G | unshuffled_deduplicated_ca | 2458067 | 729,333,440 | 4.3G |
| Cebuano | ceb | unshuffled_original_ceb | 56248 | 6,603,567 | 39M | unshuffled_deduplicated_ceb | 26145 | 3,675,024 | 24M |
| Central Bikol | bcl | unshuffled_original_bcl | 1 | 312 | 885 | unshuffled_deduplicated_bcl | 1 | 312 | 885 |
| Central Khmer | km | unshuffled_original_km | 159363 | 20,690,610 | 1.1G | unshuffled_deduplicated_km | 108346 | 10,082,245 | 581M |
| Central Kurdish | ckb | unshuffled_original_ckb | 103639 | 48,478,334 | 487M | unshuffled_deduplicated_ckb | 68210 | 18,726,721 | 226M |
| Chavacano | cbk | unshuffled_original_cbk | 1 | 130 | 520 | unshuffled_deduplicated_cbk | 1 | 130 | 520 |
| Chechen | ce | unshuffled_original_ce | 4042 | 711,051 | 8.3M | unshuffled_deduplicated_ce | 2984 | 568,146 | 6.7M |
| Chinese | zh | unshuffled_original_zh | 60137667 | 14,986,424,850 | 508G | unshuffled_deduplicated_zh | 41708901 | 6,350,215,113 | 249G |
| Chuvash | cv | unshuffled_original_cv | 20281 | 3,041,614 | 39M | unshuffled_deduplicated_cv | 10130 | 2,054,810 | 26M |
| Cornish | kw | unshuffled_original_kw | 203 | 8,329 | 44K | unshuffled_deduplicated_kw | 68 | 2,704 | 14K |
| Croatian | hr | unshuffled_original_hr | 582219 | 34,232,765 | 226M | unshuffled_deduplicated_hr | 321484 | 16,727,640 | 110M |
| Czech | cs | unshuffled_original_cs | 21001388 | 7,715,977,441 | 53G | unshuffled_deduplicated_cs | 12308039 | 3,540,997,509 | 24G |
| Danish | da | unshuffled_original_da | 7664010 | 2,637,463,889 | 16G | unshuffled_deduplicated_da | 4771098 | 1,620,091,317 | 9.5G |
| Dhivehi | dv | unshuffled_original_dv | 21018 | 7,559,472 | 126M | unshuffled_deduplicated_dv | 17024 | 4,726,660 | 79M |
| Dimli | diq | unshuffled_original_diq | 1 | 19 | 146 | unshuffled_deduplicated_diq | 1 | 19 | 146 |
| Dutch | nl | unshuffled_original_nl | 34682142 | 13,020,136,373 | 78G | unshuffled_deduplicated_nl | 20812149 | 6,598,786,137 | 39G |
| Eastern Mari | mhr | unshuffled_original_mhr | 3212 | 565,992 | 7.2M | unshuffled_deduplicated_mhr | 2515 | 469,297 | 6.0M |
| Egyptian Arabic | arz | unshuffled_original_arz | 158113 | 7,305,151 | 66M | unshuffled_deduplicated_arz | 79928 | 3,659,419 | 33M |
| Emilian-Romagnol | eml | unshuffled_original_eml | 84 | 6,376 | 25K | unshuffled_deduplicated_eml | 80 | 6,121 | 24K |
| English | en | unshuffled_original_en | 455994980 | 418,187,793,408 | 2.3T | unshuffled_deduplicated_en | 304230423 | 215,841,256,971 | 1.2T |
| Erzya | myv | unshuffled_original_myv | 6 | 90 | 1.4K | unshuffled_deduplicated_myv | 5 | 78 | 1.2K |
| Esperanto | eo | unshuffled_original_eo | 121171 | 48,486,161 | 299M | unshuffled_deduplicated_eo | 84752 | 37,324,446 | 228M |
| Estonian | et | unshuffled_original_et | 2093621 | 643,163,730 | 4.8G | unshuffled_deduplicated_et | 1172041 | 309,931,463 | 2.3G |
| Finnish | fi | unshuffled_original_fi | 8557453 | 3,196,666,419 | 27G | unshuffled_deduplicated_fi | 5326443 | 1,597,855,468 | 13G |
| French | fr | unshuffled_original_fr | 96742378 | 46,896,036,417 | 282G | unshuffled_deduplicated_fr | 59448891 | 23,206,776,649 | 138G |
| Galician | gl | unshuffled_original_gl | 544388 | 102,011,291 | 620M | unshuffled_deduplicated_gl | 284320 | 63,600,602 | 384M |
| Georgian | ka | unshuffled_original_ka | 563916 | 171,950,621 | 3.6G | unshuffled_deduplicated_ka | 372158 | 91,569,739 | 1.9G |
| German | de | unshuffled_original_de | 104913504 | 44,878,908,446 | 308G | unshuffled_deduplicated_de | 62398034 | 21,529,164,172 | 145G |
| Goan Konkani | gom | unshuffled_original_gom | 640 | 124,277 | 2.2M | unshuffled_deduplicated_gom | 484 | 102,306 | 1.8M |
| Guarani | gn | unshuffled_original_gn | 106 | 7,382 | 36K | unshuffled_deduplicated_gn | 68 | 4,680 | 24K |
| Gujarati | gu | unshuffled_original_gu | 240691 | 72,045,701 | 1.1G | unshuffled_deduplicated_gu | 169834 | 50,023,432 | 722M |
| Haitian | ht | unshuffled_original_ht | 13 | 1,014 | 3.9K | unshuffled_deduplicated_ht | 9 | 832 | 3.3K |
| Hebrew | he | unshuffled_original_he | 3808397 | 2,067,753,528 | 20G | unshuffled_deduplicated_he | 2375030 | 1,032,018,056 | 9.8G |
| Hindi | hi | unshuffled_original_hi | 3264660 | 1,372,234,782 | 17G | unshuffled_deduplicated_hi | 1909387 | 745,774,934 | 8.9G |
| Hungarian | hu | unshuffled_original_hu | 11197780 | 5,163,936,345 | 40G | unshuffled_deduplicated_hu | 6582908 | 2,339,127,555 | 18G |
| Icelandic | is | unshuffled_original_is | 625673 | 219,900,094 | 1.5G | unshuffled_deduplicated_is | 389515 | 129,818,331 | 846M |
| Ido | io | unshuffled_original_io | 694 | 25,702 | 147K | unshuffled_deduplicated_io | 617 | 22,773 | 130K |
| Iloko | ilo | unshuffled_original_ilo | 2638 | 142,942 | 874K | unshuffled_deduplicated_ilo | 1578 | 105,564 | 636K |
| Indonesian | id | unshuffled_original_id | 16236463 | 4,574,692,265 | 30G | unshuffled_deduplicated_id | 9948521 | 2,394,957,629 | 16G |
| Interlingua | ia | unshuffled_original_ia | 1040 | 180,231 | 662K | unshuffled_deduplicated_ia | 529 | 100,019 | 360K |
| Interlingue | ie | unshuffled_original_ie | 101 | 5,352 | 24K | unshuffled_deduplicated_ie | 11 | 602 | 1.6K |
| Irish | ga | unshuffled_original_ga | 83223 | 14,483,593 | 88M | unshuffled_deduplicated_ga | 46493 | 10,017,303 | 60M |
| Italian | it | unshuffled_original_it | 46981781 | 22,248,707,341 | 137G | unshuffled_deduplicated_it | 28522082 | 11,250,012,896 | 69G |
| Japanese | ja | unshuffled_original_ja | 62721527 | 4,962,979,182 | 216G | unshuffled_deduplicated_ja | 39496439 | 1,123,067,063 | 106G |
| Javanese | jv | unshuffled_original_jv | 1445 | 104,896 | 659K | unshuffled_deduplicated_jv | 1163 | 86,654 | 583K |
| Kalmyk | xal | unshuffled_original_xal | 39 | 10,277 | 113K | unshuffled_deduplicated_xal | 36 | 10,155 | 112K |
| Kannada | kn | unshuffled_original_kn | 350363 | 81,186,863 | 1.7G | unshuffled_deduplicated_kn | 251064 | 49,343,462 | 1.1G |
| Karachay-Balkar | krc | unshuffled_original_krc | 1581 | 185,436 | 2.6M | unshuffled_deduplicated_krc | 1377 | 166,496 | 2.3M |
| Kazakh | kk | unshuffled_original_kk | 524591 | 191,126,469 | 2.7G | unshuffled_deduplicated_kk | 338073 | 108,388,743 | 1.5G |
| Kirghiz | ky | unshuffled_original_ky | 146993 | 44,194,823 | 600M | unshuffled_deduplicated_ky | 86561 | 28,982,620 | 388M |
| Komi | kv | unshuffled_original_kv | 1549 | 201,404 | 2.3M | unshuffled_deduplicated_kv | 924 | 95,243 | 1.2M |
| Korean | ko | unshuffled_original_ko | 7345075 | 2,368,765,142 | 24G | unshuffled_deduplicated_ko | 3675420 | 1,120,375,149 | 12G |
| Kurdish | ku | unshuffled_original_ku | 46535 | 15,561,003 | 94M | unshuffled_deduplicated_ku | 29054 | 9,946,440 | 60M |
| Lao | lo | unshuffled_original_lo | 52910 | 4,133,311 | 174M | unshuffled_deduplicated_lo | 32652 | 2,583,342 | 114M |
| Latin | la | unshuffled_original_la | 94588 | 4,122,201 | 26M | unshuffled_deduplicated_la | 18808 | 1,328,038 | 8.3M |
| Latvian | lv | unshuffled_original_lv | 1593820 | 520,761,977 | 4.0G | unshuffled_deduplicated_lv | 843195 | 236,428,905 | 1.8G |
| Lezghian | lez | unshuffled_original_lez | 1485 | 247,646 | 3.3M | unshuffled_deduplicated_lez | 1381 | 224,871 | 3.0M |
| Limburgan | li | unshuffled_original_li | 137 | 4,730 | 29K | unshuffled_deduplicated_li | 118 | 4,283 | 27K |
| Lithuanian | lt | unshuffled_original_lt | 2977757 | 1,159,661,742 | 8.8G | unshuffled_deduplicated_lt | 1737411 | 516,183,525 | 3.9G |
| Lojban | jbo | unshuffled_original_jbo | 832 | 154,330 | 736K | unshuffled_deduplicated_jbo | 617 | 141,973 | 678K |
| Lombard | lmo | unshuffled_original_lmo | 1401 | 75,229 | 443K | unshuffled_deduplicated_lmo | 1374 | 73,665 | 433K |
| Low German | nds | unshuffled_original_nds | 18174 | 2,906,347 | 18M | unshuffled_deduplicated_nds | 8714 | 2,146,417 | 13M |
| Lower Sorbian | dsb | unshuffled_original_dsb | 65 | 1,787 | 13K | unshuffled_deduplicated_dsb | 37 | 966 | 7.1K |
| Luxembourgish | lb | unshuffled_original_lb | 34807 | 4,403,577 | 29M | unshuffled_deduplicated_lb | 21735 | 3,087,650 | 21M |
| Macedonian | mk | unshuffled_original_mk | 437871 | 189,289,873 | 2.1G | unshuffled_deduplicated_mk | 299457 | 102,849,595 | 1.2G |
| Maithili | mai | unshuffled_original_mai | 123 | 69,161 | 317K | unshuffled_deduplicated_mai | 25 | 874 | 11K |
| Malagasy | mg | unshuffled_original_mg | 17957 | 3,068,360 | 21M | unshuffled_deduplicated_mg | 13343 | 1,872,044 | 13M |
| Malay | ms | unshuffled_original_ms | 534016 | 16,696,882 | 111M | unshuffled_deduplicated_ms | 183443 | 6,045,753 | 42M |
| Malayalam | ml | unshuffled_original_ml | 603937 | 189,534,472 | 4.9G | unshuffled_deduplicated_ml | 453904 | 95,892,551 | 2.5G |
| Maltese | mt | unshuffled_original_mt | 26598 | 2,995,654 | 24M | unshuffled_deduplicated_mt | 16383 | 2,163,358 | 17M |
| Marathi | mr | unshuffled_original_mr | 326804 | 162,609,404 | 2.7G | unshuffled_deduplicated_mr | 212556 | 82,130,803 | 1.4G |
| Mazanderani | mzn | unshuffled_original_mzn | 1055 | 73,870 | 691K | unshuffled_deduplicated_mzn | 917 | 64,481 | 602K |
| Minangkabau | min | unshuffled_original_min | 220 | 5,682 | 608K | unshuffled_deduplicated_min | 166 | 4,825 | 310K |
| Mingrelian | xmf | unshuffled_original_xmf | 3783 | 299,098 | 5.8M | unshuffled_deduplicated_xmf | 2418 | 228,629 | 4.4M |
| Mirandese | mwl | unshuffled_original_mwl | 8 | 171 | 1.2K | unshuffled_deduplicated_mwl | 7 | 152 | 1.1K |
| Modern Greek | el | unshuffled_original_el | 10425596 | 5,479,180,137 | 62G | unshuffled_deduplicated_el | 6521169 | 2,412,419,435 | 27G |
| Mongolian | mn | unshuffled_original_mn | 395605 | 181,307,167 | 2.2G | unshuffled_deduplicated_mn | 197878 | 68,362,013 | 838M |
| Nahuatl languages | nah | unshuffled_original_nah | 61 | 1,234 | 12K | unshuffled_deduplicated_nah | 58 | 1,193 | 11K |
| Neapolitan | nap | unshuffled_original_nap | 73 | 5,282 | 17K | unshuffled_deduplicated_nap | 55 | 4,147 | 13K |
| Nepali | ne | unshuffled_original_ne | 299938 | 107,448,208 | 1.8G | unshuffled_deduplicated_ne | 219334 | 71,628,317 | 1.2G |
| Newari | new | unshuffled_original_new | 4696 | 564,697 | 5.5M | unshuffled_deduplicated_new | 2126 | 288,995 | 4.1M |
| Northern Frisian | frr | unshuffled_original_frr | 7 | 1,516 | 4.4K | unshuffled_deduplicated_frr | 7 | 1,516 | 4.4K |
| Northern Luri | lrc | unshuffled_original_lrc | 88 | 8,022 | 76K | unshuffled_deduplicated_lrc | 72 | 6,740 | 63K |
| Norwegian | no | unshuffled_original_no | 5546211 | 1,344,326,388 | 8.0G | unshuffled_deduplicated_no | 3229940 | 804,894,377 | 4.7G |
| Norwegian Nynorsk | nn | unshuffled_original_nn | 185884 | 14,764,980 | 85M | unshuffled_deduplicated_nn | 109118 | 9,435,139 | 54M |
| Occitan | oc | unshuffled_original_oc | 10709 | 750,301 | 5.8M | unshuffled_deduplicated_oc | 6485 | 512,678 | 3.7M |
| Oriya | or | unshuffled_original_or | 59463 | 14,938,567 | 248M | unshuffled_deduplicated_or | 44230 | 11,321,740 | 188M |
| Ossetian | os | unshuffled_original_os | 5213 | 1,031,268 | 13M | unshuffled_deduplicated_os | 2559 | 878,765 | 11M |
| Pampanga | pam | unshuffled_original_pam | 3 | 130 | 760 | unshuffled_deduplicated_pam | 1 | 52 | 304 |
| Panjabi | pa | unshuffled_original_pa | 127467 | 61,847,806 | 763M | unshuffled_deduplicated_pa | 87235 | 37,555,835 | 460M |
| Persian | fa | unshuffled_original_fa | 13704702 | 9,096,554,121 | 79G | unshuffled_deduplicated_fa | 8203495 | 4,363,505,319 | 38G |
| Piemontese | pms | unshuffled_original_pms | 3225 | 362,013 | 2.1M | unshuffled_deduplicated_pms | 2859 | 337,246 | 1.9M |
| Polish | pl | unshuffled_original_pl | 35440972 | 15,277,255,137 | 109G | unshuffled_deduplicated_pl | 20682611 | 6,708,709,674 | 47G |
| Portuguese | pt | unshuffled_original_pt | 42114520 | 20,641,903,898 | 124G | unshuffled_deduplicated_pt | 26920397 | 10,751,156,918 | 64G |
| Pushto | ps | unshuffled_original_ps | 98216 | 46,559,441 | 361M | unshuffled_deduplicated_ps | 67921 | 31,347,348 | 242M |
| Quechua | qu | unshuffled_original_qu | 452 | 10,186 | 78K | unshuffled_deduplicated_qu | 411 | 8,691 | 67K |
| Romanian | ro | unshuffled_original_ro | 9387265 | 3,984,317,058 | 25G | unshuffled_deduplicated_ro | 5044757 | 1,741,794,069 | 11G |
| Romansh | rm | unshuffled_original_rm | 41 | 1,093 | 7.4K | unshuffled_deduplicated_rm | 34 | 960 | 6.5K |
| Russia Buriat | bxr | unshuffled_original_bxr | 42 | 963 | 13K | unshuffled_deduplicated_bxr | 36 | 809 | 11K |
| Russian | ru | unshuffled_original_ru | 161836003 | 92,522,407,837 | 1.2T | unshuffled_deduplicated_ru | 115954598 | 46,692,691,520 | 568G |
| Sanskrit | sa | unshuffled_original_sa | 14291 | 4,331,569 | 93M | unshuffled_deduplicated_sa | 7121 | 1,713,930 | 37M |
| Scottish Gaelic | gd | unshuffled_original_gd | 5799 | 310,689 | 1.9M | unshuffled_deduplicated_gd | 3883 | 207,110 | 1.3M |
| Serbian | sr | unshuffled_original_sr | 1013619 | 364,395,411 | 3.9G | unshuffled_deduplicated_sr | 645747 | 207,561,168 | 2.2G |
| Serbo-Croatian | sh | unshuffled_original_sh | 36700 | 5,292,184 | 25M | unshuffled_deduplicated_sh | 17610 | 1,040,573 | 5.8M |
| Sicilian | scn | unshuffled_original_scn | 21 | 554 | 3.3K | unshuffled_deduplicated_scn | 17 | 468 | 2.8K |
| Sindhi | sd | unshuffled_original_sd | 44280 | 43,530,158 | 347M | unshuffled_deduplicated_sd | 33925 | 33,028,015 | 263M |
| Sinhala | si | unshuffled_original_si | 203082 | 93,053,465 | 1.4G | unshuffled_deduplicated_si | 120684 | 50,864,857 | 802M |
| Slovak | sk | unshuffled_original_sk | 5492194 | 1,322,247,763 | 9.1G | unshuffled_deduplicated_sk | 2820821 | 656,346,179 | 4.5G |
| Slovenian | sl | unshuffled_original_sl | 1746604 | 387,399,700 | 2.5G | unshuffled_deduplicated_sl | 886223 | 193,926,684 | 1.3G |
| Somali | so | unshuffled_original_so | 156 | 1,202 | 61K | unshuffled_deduplicated_so | 42 | 472 | 16K |
| South Azerbaijani | azb | unshuffled_original_azb | 15446 | 2,175,054 | 27M | unshuffled_deduplicated_azb | 9985 | 1,528,709 | 19M |
| Spanish | es | unshuffled_original_es | 88199221 | 47,545,122,279 | 278G | unshuffled_deduplicated_es | 56326016 | 25,928,290,729 | 149G |
| Sundanese | su | unshuffled_original_su | 805 | 30,321 | 211K | unshuffled_deduplicated_su | 511 | 20,278 | 141K |
| Swahili | sw | unshuffled_original_sw | 41986 | 2,211,927 | 13M | unshuffled_deduplicated_sw | 24803 | 1,376,963 | 8.1M |
| Swedish | sv | unshuffled_original_sv | 17395625 | 7,155,994,312 | 44G | unshuffled_deduplicated_sv | 11014487 | 4,106,120,608 | 25G |
| Tagalog | tl | unshuffled_original_tl | 458206 | 98,949,299 | 573M | unshuffled_deduplicated_tl | 294132 | 70,121,601 | 407M |
| Tajik | tg | unshuffled_original_tg | 89002 | 31,758,142 | 379M | unshuffled_deduplicated_tg | 56259 | 21,029,893 | 249M |
| Tamil | ta | unshuffled_original_ta | 1263280 | 420,537,132 | 9.3G | unshuffled_deduplicated_ta | 833101 | 226,013,330 | 5.1G |
| Tatar | tt | unshuffled_original_tt | 135923 | 51,034,893 | 670M | unshuffled_deduplicated_tt | 82738 | 23,825,695 | 305M |
| Telugu | te | unshuffled_original_te | 475703 | 123,711,517 | 2.5G | unshuffled_deduplicated_te | 312644 | 79,094,167 | 1.6G |
| Thai | th | unshuffled_original_th | 6064129 | 951,743,087 | 36G | unshuffled_deduplicated_th | 3749826 | 368,965,202 | 16G |
| Tibetan | bo | unshuffled_original_bo | 26795 | 1,483,589 | 187M | unshuffled_deduplicated_bo | 15762 | 936,556 | 138M |
| Turkish | tr | unshuffled_original_tr | 18535253 | 7,577,388,700 | 60G | unshuffled_deduplicated_tr | 11596446 | 3,365,734,289 | 27G |
| Turkmen | tk | unshuffled_original_tk | 6456 | 1,113,869 | 11M | unshuffled_deduplicated_tk | 4694 | 752,326 | 6.8M |
| Tuvinian | tyv | unshuffled_original_tyv | 34 | 759 | 12K | unshuffled_deduplicated_tyv | 24 | 540 | 7.9K |
| Uighur | ug | unshuffled_original_ug | 22255 | 8,657,141 | 122M | unshuffled_deduplicated_ug | 15503 | 5,852,225 | 83M |
| Ukrainian | uk | unshuffled_original_uk | 12973467 | 4,204,381,276 | 53G | unshuffled_deduplicated_uk | 7782375 | 2,252,380,351 | 28G |
| Upper Sorbian | hsb | unshuffled_original_hsb | 7959 | 545,351 | 4.2M | unshuffled_deduplicated_hsb | 3084 | 236,867 | 1.8M |
| Urdu | ur | unshuffled_original_ur | 638596 | 331,817,982 | 2.7G | unshuffled_deduplicated_ur | 428674 | 218,030,228 | 1.7G |
| Uzbek | uz | unshuffled_original_uz | 27537 | 2,450,256 | 21M | unshuffled_deduplicated_uz | 15074 | 1,381,644 | 12M |
| Venetian | vec | unshuffled_original_vec | 73 | 3,492 | 18K | unshuffled_deduplicated_vec | 64 | 3,199 | 17K |
| Vietnamese | vi | unshuffled_original_vi | 14898250 | 12,036,845,359 | 68G | unshuffled_deduplicated_vi | 9897709 | 5,577,159,843 | 32G |
| Volapük | vo | unshuffled_original_vo | 3366 | 321,121 | 2.0M | unshuffled_deduplicated_vo | 3317 | 318,568 | 2.0M |
| Walloon | wa | unshuffled_original_wa | 1001 | 50,720 | 273K | unshuffled_deduplicated_wa | 677 | 37,543 | 203K |
| Waray | war | unshuffled_original_war | 9760 | 397,315 | 2.5M | unshuffled_deduplicated_war | 9161 | 336,311 | 2.2M |
| Welsh | cy | unshuffled_original_cy | 157698 | 37,422,441 | 213M | unshuffled_deduplicated_cy | 98225 | 23,574,673 | 133M |
| Western Frisian | fy | unshuffled_original_fy | 33053 | 5,691,077 | 35M | unshuffled_deduplicated_fy | 20661 | 4,223,816 | 26M |
| Western Mari | mrj | unshuffled_original_mrj | 757 | 93,338 | 1.2M | unshuffled_deduplicated_mrj | 669 | 87,780 | 1.1M |
| Western Panjabi | pnb | unshuffled_original_pnb | 4599 | 1,426,986 | 12M | unshuffled_deduplicated_pnb | 3463 | 1,111,112 | 9.0M |
| Wu Chinese | wuu | unshuffled_original_wuu | 214 | 11,189 | 109K | unshuffled_deduplicated_wuu | 64 | 4,333 | 32K |
| Yakut | sah | unshuffled_original_sah | 22301 | 2,547,623 | 42M | unshuffled_deduplicated_sah | 8555 | 1,789,174 | 26M |
| Yiddish | yi | unshuffled_original_yi | 59364 | 13,834,320 | 141M | unshuffled_deduplicated_yi | 32919 | 8,212,970 | 84M |
| Yoruba | yo | unshuffled_original_yo | 214 | 8,906 | 55K | unshuffled_deduplicated_yo | 49 | 3,518 | 27K |
| Yue Chinese | yue | unshuffled_original_yue | 11 | 186 | 3.7K | unshuffled_deduplicated_yue | 7 | 128 | 2.2K |
</details>
## Dataset Creation
### Curation Rationale
OSCAR was constructed using a new pipeline derived from [fastText's](https://github.com/facebookresearch/fastText), called [_goclassy_](https://github.com/pjox/goclassy). Goclassy reuses the [fastText linear classifier](https://fasttext.cc) and the pre-trained fastText model for language recognition, but it completely rewrites and parallelises the pipeline in an asynchronous manner.
The order of operations is more or less the same as in the fastText pre-processing pipeline, but instead of clustering multiple operations into a single blocking process, a worker is launched for each operation, with the number of parallel operations at any given time bounded by the number of available threads rather than the number of CPUs. Goclassy is implemented in the [Go programming language](https://golang.org/), so it lets the [Go runtime](https://golang.org/src/runtime/mprof.go) handle the scheduling of the processes. Thus the pipeline does not have to wait for a whole WET file to be downloaded, decompressed and classified before downloading and processing the next one; a new file starts downloading and processing as soon as the scheduler can allocate a new process.
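Goclassy itself is written in Go, but the scheduling idea is easy to sketch in Python: bound the number of in-flight files with a thread pool so that downloading the next WET file overlaps with processing the current one. Everything below (the URLs and the `process_wet_file` helper) is a hypothetical stand-in, not actual goclassy code.
```python
import os
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_wet_file(url: str) -> str:
    """Hypothetical stand-in for downloading, decompressing and classifying one WET file."""
    time.sleep(0.1)  # simulate I/O-bound work
    return url

# Bound parallelism by the available threads, as goclassy does, so a slow
# download never blocks files that are already ready for processing.
urls = [f"https://example.org/wet/file-{i:05d}.warc.wet.gz" for i in range(20)]
with ThreadPoolExecutor(max_workers=os.cpu_count() or 4) as pool:
    futures = [pool.submit(process_wet_file, u) for u in urls]
    for future in as_completed(futures):
        print("finished:", future.result())
```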
Filtering and cleaning processes at line level are done before feeding each line to the classifier. Lines shorter than 100 UTF-8 characters and lines containing invalid UTF-8 characters are discarded and are not classified. After all files are processed, the deduplicated versions are constructed and everything is then split into shards and compressed.
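The two line-level rules above are simple to state precisely. Here is a minimal Python sketch of them, assuming fastText's published language-identification model (`lid.176.bin`) has been downloaded locally; this illustrates the rules, it is not the goclassy implementation.
```python
import fasttext  # pip install fasttext

model = fasttext.load_model("lid.176.bin")  # pre-trained fastText language ID model

def classify_lines(raw_lines):
    """Yield (language, line) pairs for the lines that survive both filters."""
    for raw in raw_lines:
        try:
            line = raw.decode("utf-8").strip()
        except UnicodeDecodeError:
            continue  # invalid UTF-8: discarded, never classified
        if len(line) < 100:
            continue  # shorter than 100 UTF-8 characters: discarded
        labels, _ = model.predict(line)
        yield labels[0].replace("__label__", ""), line
```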
### Source Data
#### Initial Data Collection and Normalization
[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organisation's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.
Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.
To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of OSCAR, the **November 2018** snapshot was used. It surpasses 20TB of uncompressed data and contains more than 50 thousand plain text files, where each file consists of the plain text from multiple websites along with its metadata header.
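For readers unfamiliar with the format, a WET file can be iterated record by record; text extracts are stored as `conversion` records. A minimal sketch using the `warcio` library (an assumption of this card, not part of the OSCAR tooling), with the file name as a placeholder for any WET file from the November 2018 crawl:
```python
from warcio.archiveiterator import ArchiveIterator  # pip install warcio

# WET files are gzip-compressed; warcio decompresses them transparently.
with open("example.warc.wet.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type != "conversion":  # WET plain-text extracts
            continue
        url = record.rec_headers.get_header("WARC-Target-URI")
        text = record.content_stream().read().decode("utf-8", errors="replace")
        print(url, len(text))
```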
#### Who are the source language producers?
The data comes from multiple web pages in a large variety of languages.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
Since OSCAR is constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.
## Considerations for Using the Data
### Social Impact of Dataset
OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.
### Discussion of Biases
OSCAR is not thoroughly filtered yet, and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models.
### Other Known Limitations
The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).
## Additional Information
### Dataset Curators
The corpus was put together by [Pedro J. Ortiz](https://pjortiz.eu/), [Benoît Sagot](http://pauillac.inria.fr/~sagot/), and [Laurent Romary](https://cv.archives-ouvertes.fr/laurentromary), during work done at [Inria](https://www.inria.fr/en), particularly at the [ALMAnaCH team](https://team.inria.fr/almanach/).
### Licensing Information
These data are released under the following licensing scheme:
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR.
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Benoit Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
### Contributions
Thanks to [@pjox](https://github.com/pjox) and [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
para_crawl | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc0-1.0
multilinguality:
- translation
pretty_name: ParaCrawl
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: paracrawl
dataset_info:
- config_name: enbg
features:
- name: translation
dtype:
translation:
languages:
- en
- bg
splits:
- name: train
num_bytes: 356532771
num_examples: 1039885
download_size: 103743335
dataset_size: 356532771
- config_name: encs
features:
- name: translation
dtype:
translation:
languages:
- en
- cs
splits:
- name: train
num_bytes: 638068353
num_examples: 2981949
download_size: 196410022
dataset_size: 638068353
- config_name: enda
features:
- name: translation
dtype:
translation:
languages:
- en
- da
splits:
- name: train
num_bytes: 598624306
num_examples: 2414895
download_size: 182804827
dataset_size: 598624306
- config_name: ende
features:
- name: translation
dtype:
translation:
languages:
- en
- de
splits:
- name: train
num_bytes: 3997191986
num_examples: 16264448
download_size: 1307754745
dataset_size: 3997191986
- config_name: enel
features:
- name: translation
dtype:
translation:
languages:
- en
- el
splits:
- name: train
num_bytes: 688069020
num_examples: 1985233
download_size: 193553374
dataset_size: 688069020
- config_name: enes
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 6209466040
num_examples: 21987267
download_size: 1953839527
dataset_size: 6209466040
- config_name: enet
features:
- name: translation
dtype:
translation:
languages:
- en
- et
splits:
- name: train
num_bytes: 201408919
num_examples: 853422
download_size: 70158650
dataset_size: 201408919
- config_name: enfi
features:
- name: translation
dtype:
translation:
languages:
- en
- fi
splits:
- name: train
num_bytes: 524624150
num_examples: 2156069
download_size: 159209242
dataset_size: 524624150
- config_name: enfr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 9015440258
num_examples: 31374161
download_size: 2827554088
dataset_size: 9015440258
- config_name: enga
features:
- name: translation
dtype:
translation:
languages:
- en
- ga
splits:
- name: train
num_bytes: 104523278
num_examples: 357399
download_size: 29394367
dataset_size: 104523278
- config_name: enhr
features:
- name: translation
dtype:
translation:
languages:
- en
- hr
splits:
- name: train
num_bytes: 247646552
num_examples: 1002053
download_size: 84904103
dataset_size: 247646552
- config_name: enhu
features:
- name: translation
dtype:
translation:
languages:
- en
- hu
splits:
- name: train
num_bytes: 403168065
num_examples: 1901342
download_size: 119784765
dataset_size: 403168065
- config_name: enit
features:
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 3340542050
num_examples: 12162239
download_size: 1066720197
dataset_size: 3340542050
- config_name: enlt
features:
- name: translation
dtype:
translation:
languages:
- en
- lt
splits:
- name: train
num_bytes: 197053694
num_examples: 844643
download_size: 66358392
dataset_size: 197053694
- config_name: enlv
features:
- name: translation
dtype:
translation:
languages:
- en
- lv
splits:
- name: train
num_bytes: 142409870
num_examples: 553060
download_size: 47368967
dataset_size: 142409870
- config_name: enmt
features:
- name: translation
dtype:
translation:
languages:
- en
- mt
splits:
- name: train
num_bytes: 52786023
num_examples: 195502
download_size: 19028352
dataset_size: 52786023
- config_name: ennl
features:
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 1384042007
num_examples: 5659268
download_size: 420090979
dataset_size: 1384042007
- config_name: enpl
features:
- name: translation
dtype:
translation:
languages:
- en
- pl
splits:
- name: train
num_bytes: 854786500
num_examples: 3503276
download_size: 270427885
dataset_size: 854786500
- config_name: enpt
features:
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: train
num_bytes: 2031891156
num_examples: 8141940
download_size: 638184462
dataset_size: 2031891156
- config_name: enro
features:
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: train
num_bytes: 518359240
num_examples: 1952043
download_size: 160684751
dataset_size: 518359240
- config_name: ensk
features:
- name: translation
dtype:
translation:
languages:
- en
- sk
splits:
- name: train
num_bytes: 337704729
num_examples: 1591831
download_size: 101307152
dataset_size: 337704729
- config_name: ensl
features:
- name: translation
dtype:
translation:
languages:
- en
- sl
splits:
- name: train
num_bytes: 182399034
num_examples: 660161
download_size: 65037465
dataset_size: 182399034
- config_name: ensv
features:
- name: translation
dtype:
translation:
languages:
- en
- sv
splits:
- name: train
num_bytes: 875576366
num_examples: 3476729
download_size: 275528370
dataset_size: 875576366
---
# Dataset Card for "para_crawl"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://paracrawl.eu/releases.html](https://paracrawl.eu/releases.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 10.36 GB
- **Size of the generated dataset:** 32.90 GB
- **Total amount of disk used:** 43.26 GB
### Dataset Summary
Web-Scale Parallel Corpora for Official European Languages.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### enbg
- **Size of downloaded dataset files:** 103.75 MB
- **Size of the generated dataset:** 356.54 MB
- **Total amount of disk used:** 460.27 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"translation": "{\"bg\": \". “A felirat faragott karnis a bejárat fölött, templom épült 14 Július 1643, A földesúr és felesége Jeremiás Murguleţ, C..."
}
```
#### encs
- **Size of downloaded dataset files:** 196.41 MB
- **Size of the generated dataset:** 638.07 MB
- **Total amount of disk used:** 834.48 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"translation": "{\"cs\": \". “A felirat faragott karnis a bejárat fölött, templom épült 14 Július 1643, A földesúr és felesége Jeremiás Murguleţ, C..."
}
```
#### enda
- **Size of downloaded dataset files:** 182.81 MB
- **Size of the generated dataset:** 598.62 MB
- **Total amount of disk used:** 781.43 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"translation": "{\"da\": \". “A felirat faragott karnis a bejárat fölött, templom épült 14 Július 1643, A földesúr és felesége Jeremiás Murguleţ, C..."
}
```
#### ende
- **Size of downloaded dataset files:** 1.31 GB
- **Size of the generated dataset:** 4.00 GB
- **Total amount of disk used:** 5.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"translation": "{\"de\": \". “A felirat faragott karnis a bejárat fölött, templom épült 14 Július 1643, A földesúr és felesége Jeremiás Murguleţ, C..."
}
```
#### enel
- **Size of downloaded dataset files:** 193.56 MB
- **Size of the generated dataset:** 688.07 MB
- **Total amount of disk used:** 881.62 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"translation": "{\"el\": \". “A felirat faragott karnis a bejárat fölött, templom épült 14 Július 1643, A földesúr és felesége Jeremiás Murguleţ, C..."
}
```
### Data Fields
The data fields are the same among all splits.
#### enbg
- `translation`: a multilingual `string` variable, with possible languages including `en`, `bg`.
#### encs
- `translation`: a multilingual `string` variable, with possible languages including `en`, `cs`.
#### enda
- `translation`: a multilingual `string` variable, with possible languages including `en`, `da`.
#### ende
- `translation`: a multilingual `string` variable, with possible languages including `en`, `de`.
#### enel
- `translation`: a multilingual `string` variable, with possible languages including `en`, `el`.
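Each example's `translation` value behaves as a dictionary keyed by language code. A minimal sketch with the `datasets` library, using the `enbg` configuration defined above:
```python
from datasets import load_dataset

dataset = load_dataset("para_crawl", "enbg", split="train")
pair = dataset[0]["translation"]  # dict keyed by language code
print(pair["en"])
print(pair["bg"])
```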
### Data Splits
| name | train |
|------|---------:|
| enbg | 1039885 |
| encs | 2981949 |
| enda | 2414895 |
| ende | 16264448 |
| enel | 1985233 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[Creative Commons CC0 license ("no rights reserved")](https://creativecommons.org/share-your-work/public-domain/cc0/).
### Citation Information
```
@inproceedings{banon-etal-2020-paracrawl,
title = "{P}ara{C}rawl: Web-Scale Acquisition of Parallel Corpora",
author = "Ba{\~n}{\'o}n, Marta and
Chen, Pinzhen and
Haddow, Barry and
Heafield, Kenneth and
Hoang, Hieu and
Espl{\`a}-Gomis, Miquel and
Forcada, Mikel L. and
Kamran, Amir and
Kirefu, Faheem and
Koehn, Philipp and
Ortiz Rojas, Sergio and
Pla Sempere, Leopoldo and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Sarr{\'\i}as, Elsa and
Strelec, Marek and
Thompson, Brian and
Waites, William and
Wiggins, Dion and
Zaragoza, Jaume",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.acl-main.417",
doi = "10.18653/v1/2020.acl-main.417",
pages = "4555--4567",
abstract = "We report on methods to create the largest publicly available parallel corpora by crawling the web, using open source software. We empirically compare alternative methods and publish benchmark data sets for sentence alignment and sentence pair filtering. We also describe the parallel corpora released and evaluate their quality and their usefulness to create machine translation systems.",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
para_pat | ---
annotations_creators:
- machine-generated
language_creators:
- expert-generated
language:
- cs
- de
- el
- en
- es
- fr
- hu
- ja
- ko
- pt
- ro
- ru
- sk
- uk
- zh
license:
- cc-by-4.0
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- translation
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: parapat
pretty_name: Parallel Corpus of Patents Abstracts
dataset_info:
- config_name: el-en
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- el
- en
splits:
- name: train
num_bytes: 24818840
num_examples: 10855
download_size: 24894705
dataset_size: 24818840
- config_name: cs-en
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- cs
- en
splits:
- name: train
num_bytes: 117555722
num_examples: 78977
download_size: 118010340
dataset_size: 117555722
- config_name: en-hu
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- hu
splits:
- name: train
num_bytes: 80637157
num_examples: 42629
download_size: 80893995
dataset_size: 80637157
- config_name: en-ro
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: train
num_bytes: 80290819
num_examples: 48789
download_size: 80562562
dataset_size: 80290819
- config_name: en-sk
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- sk
splits:
- name: train
num_bytes: 31510348
num_examples: 23410
download_size: 31707728
dataset_size: 31510348
- config_name: en-uk
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- uk
splits:
- name: train
num_bytes: 136808871
num_examples: 89226
download_size: 137391928
dataset_size: 136808871
- config_name: es-fr
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 53767035
num_examples: 32553
download_size: 53989438
dataset_size: 53767035
- config_name: fr-ru
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: train
num_bytes: 33915203
num_examples: 10889
download_size: 33994490
dataset_size: 33915203
- config_name: de-fr
features:
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 655742822
num_examples: 1167988
download_size: 204094654
dataset_size: 655742822
- config_name: en-ja
features:
- name: translation
dtype:
translation:
languages:
- en
- ja
splits:
- name: train
num_bytes: 3100002828
num_examples: 6170339
download_size: 1093334863
dataset_size: 3100002828
- config_name: en-es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 337690858
num_examples: 649396
download_size: 105202237
dataset_size: 337690858
- config_name: en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 6103179552
num_examples: 12223525
download_size: 1846098331
dataset_size: 6103179552
- config_name: de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 1059631418
num_examples: 2165054
download_size: 339299130
dataset_size: 1059631418
- config_name: en-ko
features:
- name: translation
dtype:
translation:
languages:
- en
- ko
splits:
- name: train
num_bytes: 1466703472
num_examples: 2324357
download_size: 475152089
dataset_size: 1466703472
- config_name: fr-ja
features:
- name: translation
dtype:
translation:
languages:
- fr
- ja
splits:
- name: train
num_bytes: 211127021
num_examples: 313422
download_size: 69038401
dataset_size: 211127021
- config_name: en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: train
num_bytes: 2297993338
num_examples: 4897841
download_size: 899568201
dataset_size: 2297993338
- config_name: en-ru
features:
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: train
num_bytes: 1974874480
num_examples: 4296399
download_size: 567240359
dataset_size: 1974874480
- config_name: fr-ko
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- fr
- ko
splits:
- name: train
num_bytes: 222006786
num_examples: 120607
download_size: 64621605
dataset_size: 222006786
- config_name: ru-uk
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- ru
- uk
splits:
- name: train
num_bytes: 163442529
num_examples: 85963
download_size: 38709524
dataset_size: 163442529
- config_name: en-pt
features:
- name: index
dtype: int32
- name: family_id
dtype: int32
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: train
num_bytes: 37372555
num_examples: 23121
download_size: 12781082
dataset_size: 37372555
---
# Dataset Card for ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts](https://figshare.com/articles/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632)
- **Repository:** [ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts](https://github.com/soares-f/parapat)
- **Paper:** [ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts](https://www.aclweb.org/anthology/2020.lrec-1.465/)
- **Point of Contact:** [Felipe Soares](fs@felipesoares.net)
### Dataset Summary
ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts
This dataset contains the developed parallel corpus from the open access Google Patents dataset in 74 language pairs, comprising more than 68 million sentences and 800 million tokens. Sentences were automatically aligned using the Hunalign algorithm for the largest 22 language pairs, while the others were abstract (i.e. paragraph) aligned.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset contains samples in cs, de, el, en, es, fr, hu, ja, ko, pt, ro, ru, sk, uk, and zh.
## Dataset Structure
### Data Instances
Instances are of two types, depending on the config:
First type
{
"translation":{
"en":"A method for converting a series of m-bit information words to a modulated signal is described.",
"es":"Se describe un método para convertir una serie de palabras de informacion de bits m a una señal modulada."
}
}
Second type
{
"family_id":10944407,
"index":844,
"translation":{
"el":"αφές ο οποίος παρασκευάζεται με χαρμάνι ελληνικού καφέ είτε σε συσκευή καφέ εσπρέσο είτε σε συσκευή γαλλικού καφέ (φίλτρου) είτε κατά τον παραδοσιακό τρόπο του ελληνικού καφέ και διυλίζεται, κτυπιέται στη συνέχεια με πάγο σε χειροκίνητο ή ηλεκτρικόμίξερ ώστε να παγώσει ομοιόμορφα και να αποκτήσει πλούσιο αφρό και σερβίρεται σε ποτήρι. ΰ",
"en":"offee prepared using the mix for Greek coffee either in an espresso - type coffee making machine, or in a filter coffee making machine or in the traditional way for preparing Greek coffee and is then filtered , shaken with ice manually or with an electric mixer so that it freezes homogeneously, obtains a rich froth and is served in a glass."
}
}
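Both layouts can be loaded with the `datasets` library. In the minimal sketch below, the Hub id `para_pat` is an assumption, while the config names come from the YAML header of this card (`en-fr` has only the `translation` field; `en-ro` additionally carries `index` and `family_id`):
```python
# a minimal sketch; the Hub id "para_pat" is an assumption, the config names
# ("en-fr", "en-ro") come from the YAML header of this card
from datasets import load_dataset

en_fr = load_dataset("para_pat", "en-fr", split="train")   # first type: translation only
print(en_fr[0]["translation"]["en"])

en_ro = load_dataset("para_pat", "en-ro", split="train")   # second type: index and family_id included
print(en_ro[0]["index"], en_ro[0]["family_id"])
```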
### Data Fields
**index:** position in the corpus
**family_id:** the patent family identifier for each abstract, which researchers can use for other text-mining purposes
**translation:** dictionary containing the source and target sentence for the example
### Data Splits
No official train/validation/test splits are given; each language pair ships as a single train split.
Parallel corpora aligned at the sentence level:
|Language Pair|# Sentences|# Unique Tokens|
|--------|-----|------|
|EN/ZH|4.9M|155.8M|
|EN/JA|6.1M|189.6M|
|EN/FR|12.2M|455M|
|EN/KO|2.3M|91.4M|
|EN/DE|2.2M|81.7M|
|EN/RU|4.3M|107.3M|
|DE/FR|1.2M|38.8M|
|FR/JA|0.3M|9.9M|
|EN/ES|0.6M|24.6M|
Parallel corpora aligned at the abstract level:
|Language Pair|# Abstracts|
|--------|-----|
|FR/KO|120,607|
|EN/UK|89,227|
|RU/UK|85,963|
|CS/EN|78,978|
|EN/RO|48,789|
|EN/HU|42,629|
|ES/FR|32,553|
|EN/SK|23,410|
|EN/PT|23,122|
|BG/EN|16,177|
|FR/RU|10,889|
## Dataset Creation
### Curation Rationale
The availability of parallel corpora is required by current Statistical and Neural Machine Translation systems (SMT and NMT). Acquiring a high-quality parallel corpus that is large enough to train MT systems, particularly NMT ones, is not a trivial task due to the need for correct alignment and, in many cases, human curation. In this context, the automated creation of parallel corpora from freely available resources is extremely important in Natural Language Processing (NLP).
### Source Data
#### Initial Data Collection and Normalization
Google makes patents data available under the Google Cloud Public Datasets. BigQuery is a Google service that supports the efficient storage and querying of massive datasets, which is usually a challenging task for conventional SQL databases. For instance, filtering the September 2019 release of the dataset, which contains more than 119 million rows, can take less than 1 minute for text fields. The on-demand billing for BigQuery is based on the amount of data processed by each query, so a single query that performs a full scan can cost over USD 15.00, since the cost per TB is currently USD 5.00.
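As a rough illustration of such a query, here is a minimal sketch using the BigQuery Python client; the table and column names (`patents-public-data.patents.publications`, `abstract_localized`) are assumptions about the public dataset layout, not details taken from the paper.
```python
# a minimal sketch, not the authors' pipeline; table and column names are assumptions
from google.cloud import bigquery

client = bigquery.Client()
# selecting only the needed columns keeps the data processed (and the cost) low
query = """
    SELECT family_id, abstract_localized
    FROM `patents-public-data.patents.publications`
    WHERE ARRAY_LENGTH(abstract_localized) > 1
    LIMIT 100
"""
for row in client.query(query).result():
    print(row.family_id, [a["language"] for a in row.abstract_localized])
```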
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The following steps describe the process of producing patent aligned abstracts:
1. Load the nth individual file
2. Remove rows where fewer than two abstracts in different languages exist for a given family id. The family id attribute is used to group patents that refer to the same invention. Removing these rows discards abstracts that are available in only one language.
3. From the resulting set, create all possible parallel abstracts from the available languages. For instance, an abstract may be available in English, French and German, thus, the possible language pairs are English/French, English/German, and French/German.
4. Store the parallel patents into an SQL database for easier future handling and sampling.
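A minimal sketch of steps 2 and 3 above, using a hypothetical helper over `(family_id, language, abstract)` tuples (this is not the authors' released code):
```python
from itertools import combinations

def parallel_abstracts(rows):
    """Group abstracts by family_id and emit every language-pair combination."""
    by_family = {}
    for family_id, lang, abstract in rows:
        by_family.setdefault(family_id, {})[lang] = abstract
    for family_id, abstracts in by_family.items():
        if len(abstracts) < 2:  # step 2: drop families available in only one language
            continue
        # step 3: all possible pairs, e.g. en/fr, en/de, fr/de
        for l1, l2 in combinations(sorted(abstracts), 2):
            yield family_id, (l1, l2), (abstracts[l1], abstracts[l2])
```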
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Funded by the Google TensorFlow Research Cloud.
### Licensing Information
CC BY 4.0
### Citation Information
```
@inproceedings{soares-etal-2020-parapat,
title = "{P}ara{P}at: The Multi-Million Sentences Parallel Corpus of Patents Abstracts",
author = "Soares, Felipe and
Stevenson, Mark and
Bartolome, Diego and
Zaretskaya, Anna",
booktitle = "Proceedings of The 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://www.aclweb.org/anthology/2020.lrec-1.465",
pages = "3769--3774",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
[DOI](https://doi.org/10.6084/m9.figshare.12627632)
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. |
parsinlu_reading_comprehension | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|wikipedia|google
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: null
pretty_name: PersiNLU (Reading Comprehension)
dataset_info:
features:
- name: question
dtype: string
- name: url
dtype: string
- name: context
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: answer_text
dtype: string
config_name: parsinlu-repo
splits:
- name: train
num_bytes: 747679
num_examples: 600
- name: test
num_bytes: 681945
num_examples: 575
- name: validation
num_bytes: 163185
num_examples: 125
download_size: 4117863
dataset_size: 1592809
---
# Dataset Card for PersiNLU (Reading Comprehension)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/persiannlp/parsinlu/)
- **Repository:** [Github](https://github.com/persiannlp/parsinlu/)
- **Paper:** [Arxiv](https://arxiv.org/abs/2012.06154)
- **Leaderboard:**
- **Point of Contact:** [email](d.khashabi@gmail.com)
### Dataset Summary
A Persian reading comprehension task (generating an answer, given a question and a context paragraph).
The questions are mined using Google auto-complete, their answers and the corresponding evidence documents are manually annotated by native speakers.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text dataset is in Persian (`fa`).
## Dataset Structure
### Data Instances
Here is an example from the dataset:
```
{
'question': 'پیامبر در چه سالی به پیامبری رسید؟',
'url': 'https://fa.wikipedia.org/wiki/%D9%85%D8%AD%D9%85%D8%AF',
'passage': 'محمد که از روش زندگی مردم مکه ناخشنود بود، گهگاه در غار حرا در یکی از کوه\u200cهای اطراف آن دیار به تفکر و عبادت می\u200cپرداخت. به باور مسلمانان، محمد در همین مکان و در حدود ۴۰ سالگی از طرف خدا به پیامبری برگزیده، و وحی بر او فروفرستاده شد. در نظر آنان، دعوت محمد همانند دعوت دیگر پیامبرانِ کیش یکتاپرستی مبنی بر این بود که خداوند (الله) یکتاست و تسلیم شدن برابر خدا راه رسیدن به اوست.',
'answers': [
{'answer_start': 160, 'answer_text': 'حدود ۴۰ سالگی'}
]
}
```
### Data Fields
- `question`: the question, mined using Google auto-complete.
- `passage`: the passage that contains the answer.
- `url`: the url from which the passage was mined.
- `answers`: a list of answers, each giving the start index (`answer_start`) and the answer string (`answer_text`). Note that in the test set, some `answer_start` values are missing and replaced with `-1`.
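A small sanity-check sketch for these fields; note that the YAML header of this card names the passage field `context`, and the assumption here is that the sequence-of-structs `answers` feature loads as a dict of lists:
```python
# a minimal sketch, assuming the YAML schema above
from datasets import load_dataset

ds = load_dataset("parsinlu_reading_comprehension", split="train")
ex = ds[0]
for start, text in zip(ex["answers"]["answer_start"], ex["answers"]["answer_text"]):
    if start >= 0:  # in the test split, missing starts are encoded as -1
        print(ex["context"][start:start + len(text)], "<->", text)
```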
### Data Splits
The train/validation/test splits contain 600/125/575 examples respectively.
## Dataset Creation
### Curation Rationale
The questions were collected via Google auto-complete.
The answers were annotated by native speakers.
For more details, check [the corresponding draft](https://arxiv.org/abs/2012.06154).
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0 License
### Citation Information
```bibtex
@article{huggingface:dataset,
title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
author = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
year={2020}
journal = {arXiv e-prints},
eprint = {2012.06154},
}
```
### Contributions
Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset. |
pass | ---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- extended|yffc100M
task_categories:
- other
task_ids: []
paperswithcode_id: pass
pretty_name: Pictures without humAns for Self-Supervision
tags:
- image-self-supervised pretraining
dataset_info:
features:
- name: image
dtype: image
- name: creator_username
dtype: string
- name: hash
dtype: string
- name: gps_latitude
dtype: float32
- name: gps_longitude
dtype: float32
- name: date_taken
dtype: timestamp[us]
splits:
- name: train
num_bytes: 178563446100
num_examples: 1439588
download_size: 179640190811
dataset_size: 178563446100
---
# Dataset Card for PASS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PASS homepage](https://www.robots.ox.ac.uk/~vgg/research/pass/)
- **Repository:** [PASS repository](https://github.com/yukimasano/PASS)
- **Paper:** [PASS: An ImageNet replacement for self-supervised pretraining without humans](https://arxiv.org/abs/2109.13228)
- **Leaderboard:** [Pretrained models with scores](https://github.com/yukimasano/PASS#pretrained-models)
- **Point of Contact:** [Yuki M. Asano](mailto:yukiATMARKrobots.ox.ac.uk)
### Dataset Summary
PASS is a large-scale image dataset, containing 1.4 million images, that does not include any humans and which can be used for high-quality pretraining while significantly reducing privacy concerns.
### Supported Tasks and Leaderboards
From the paper:
> **Has the dataset been used for any tasks already?** In the paper we show and benchmark the
intended use of this dataset as a pretraining dataset. For this, the dataset is used as an unlabelled image collection on which visual features are learned and then transferred to downstream tasks. We show that with this dataset it is possible to learn competitive visual features, without any humans in the pretraining dataset and with complete license information.
> **Is there a repository that links to any or all papers or systems that use the dataset?** We will
be listing these at the repository.
> **What (other) tasks could the dataset be used for?** We believe this dataset might allow researchers and practitioners to further evaluate the differences that pretraining datasets can have on the learned features. Furthermore, since the meta-data is available for the images, it is possible to investigate the effect of image resolution on self-supervised learning methods, a domain largely underresearched thus far, as the current de-facto standard, ImageNet, only comes in one size.
> **Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?** Given that this dataset is a subset of a dataset that randomly samples images from flickr, the image distribution is biased towards European and American creators. As in the main paper's discussion, this can lead to non-generalizeable features, or even biased features as the images taken in other countries might be more likely to further reflect and propagate stereotypes [84], though in our case these do not refer to stereotypes about humans.
> **Are there tasks for which the dataset should not be used?** This dataset is meant for research
purposes only. The dataset should also not be used for, e.g. connecting images and usernames, as
this might risk de-anonymising the dataset in the long term. The usernames are solely provided for
attribution.
### Languages
English.
## Dataset Structure
### Data Instances
A data point comprises an image and its meta-data:
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FFAD48E35F8>,
'creator_username': 'NTShieldsy',
'hash': 'e1662344ffa8c231d198c367c692cc',
'gps_latitude': 21.206675,
'gps_longitude': 39.166558,
'date_taken': datetime.datetime(2012, 8, 9, 18, 0, 20)
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`), the image file is automatically decoded. Decoding a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. See the snippet after this list.
- `creator_username`: The photographer.
- `hash`: The hash, as computed from YFCC-100M.
- `gps_latitude`: Latitude of image if existent, otherwise None.
- `gps_longitude`: Longitude of image if existent, otherwise None.
- `date_taken`: Datetime of image if existent, otherwise None.
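A minimal sketch of the recommended access pattern for the `image` field (the download is roughly 180 GB, per the size information in the YAML header):
```python
from datasets import load_dataset

ds = load_dataset("pass", split="train")  # large download, roughly 180 GB
img = ds[0]["image"]      # decodes only this single image
# imgs = ds["image"]      # avoid: decodes the whole image column first
```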
### Data Splits
All the data is contained in the training set. The training set has 1,439,588 instances as this implementation corresponds to the most recent release (v3) from the [version history](https://github.com/yukimasano/PASS/blob/main/version_history.txt).
From the paper:
> **Are there recommended data splits (e.g., training, development/validation, testing)?** As outlined in the intended usecases, this dataset is meant for pretraining representations. As such, the models derived from training on this dataset need to be evaluated on different datasets, so called down-stream tasks. Thus the recommended split is to use all samples for training.
## Dataset Creation
### Curation Rationale
From the paper:
> **For what purpose was the dataset created?** Neural networks pretrained on large image collections have been shown to transfer well to other visual tasks where there is little labelled data, i.e. transferring a model works better than starting with a randomly initialized network every time for a new task, as many visual features can be repurposed. This dataset has as its goal to provide a safer large-scale dataset for such pretraining of visual features. In particular, this dataset does not contain any humans or human parts and does not contain any labels. The first point is important, as the current standard for pretraining, ImageNet and its face-blurred version only provide pseudo-anonymity and furthermore do not provide correct licences to the creators. The second point is relevant as pretraining is moving towards the self-supervised paradigm, where labels are not required. Yet most methods are developed on the highly curated ImageNet dataset, yielding potentially non-generalizeable research.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
* **Collection process**:
> **How was the data associated with each instance acquired?** The data was collected from the
publicly available dataset YFCC-100M which is hosted on the AWS public datasets platform. We have used the meta-data, namely the copyright information to filter only images with the CC-BY licence and have downloaded these using the aws command line interface, allowing for quick and stable downloading. In addition, all files were subsequently scanned for viruses using Sophos SAVScan virus detection utility, v.5.74.0.
> **What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?** Our dataset is a subset
of the YFCC-100M dataset. The YFCC-100M dataset itself was created by effectively randomly
selecting publicly available images from flickr, resulting in approximately 98M images.
> **Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?** The dataset is a sample of a larger set—all possible digital photographs. As outlined in Section 3 we start from an existing dataset, YFCC-100M, and stratify the images (removing images with people and personal information, removing images with harmful content, removing images with unsuitable licenses, each user contributes at most 80 images to the dataset). This leaves 1.6M images, out of which we take a random sample of 1.28M images to replicate the size of the ImageNet dataset. While this dataset can thus be extended, this is the set that we have verified to not contain humans, human parts and disturbing content.
> **Over what timeframe was the data collected?** The images underlying the dataset were downloaded between March and June 2021 from the AWS public datasets’ S3 bucket, following the
download code provided in the repo. However, the images contained in the dataset were originally taken anywhere from 2000 to 2015, with the majority being shot between 2010 and 2014.
* **Preprocessing/cleaning/labeling**:
> **Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing,tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?** After the download of approx. 17M images, the corrupted, or single-color images were removed from the dataset prior to the generation of the dataset(s) used in the paper. The images were not further preprocessed or edited.
> **Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?** Yes. The creators of the dataset maintain a copy of the 17M original images with the CC-BY licence of YFCC100M that sits at the start of our dataset creation pipeline.
> **Is the software used to preprocess/clean/label the instances available?** We have only used basic Python primitives for this. For the annotations we have used VIA [27, 28].
#### Who are the source language producers?
From the paper:
> **Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?** As described, the data was collected automatically by simply downloading images from a publicly hosted S3 bucket. The human verification was done using a professional data annotation company that pays 150% of the local minimum wage.
### Annotations
#### Annotation process
This dataset doesn't contain annotations.
#### Who are the annotators?
This dataset doesn't contain annotations.
### Personal and Sensitive Information
From the paper:
> **Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)?** No.
> **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?** No. Besides checking for human presence in the images, the annotators were also given the choice of flagging images for disturbing content, which once flagged was removed.
> **Does the dataset relate to people? If not, you may skip the remaining questions in this section.**
No.
> **Does the dataset identify any subpopulations (e.g., by age, gender)?** NA
> **Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?** NA
> **Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?** NA
> **Were any ethical review processes conducted (e.g., by an institutional review board)?** No
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
From the paper:
> **Is your dataset free of biases?** No. There are many kinds of biases that can either be quantified, e.g. geo-location (most images originate from the US and Europe) or camera-model (most images are taken with professional DSLR cameras not easily affordable), there are likely many more biases that this dataset does contain. The only thing that this dataset does not contain are humans and parts of humans, as far as our validation procedure is accurate.
### Other Known Limitations
From the paper:
> **Can you guarantee compliance to GDPR?** No, we cannot comment on legal issues.
## Additional Information
### Dataset Curators
YM. Asano, C. Rupprecht, A. Zisserman and A. Vedaldi.
From the paper:
> **Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?** The dataset has been constructed by the research group
“Visual Geometry Group” at the University of Oxford at the Engineering Science Department.
### Licensing Information
The PASS dataset is available to download for commercial/research purposes under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/). A complete version of the license can be found [here](https://www.robots.ox.ac.uk/~vgg/research/pass/license_pass.txt). The whole dataset only contains CC-BY licensed images with full attribution information.
### Citation Information
```bibtex
@Article{asano21pass,
author = "Yuki M. Asano and Christian Rupprecht and Andrew Zisserman and Andrea Vedaldi",
title = "PASS: An ImageNet replacement for self-supervised pretraining without humans",
journal = "NeurIPS Track on Datasets and Benchmarks",
year = "2021"
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
paws-x | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
- machine-generated
language:
- de
- en
- es
- fr
- ja
- ko
- zh
license:
- other
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-paws
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring
- multi-input-text-classification
paperswithcode_id: paws-x
pretty_name: 'PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification'
tags:
- paraphrase-identification
dataset_info:
- config_name: en
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 12215953
num_examples: 49401
- name: test
num_bytes: 494734
num_examples: 2000
- name: validation
num_bytes: 492287
num_examples: 2000
download_size: 30282057
dataset_size: 13202974
- config_name: de
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 12801824
num_examples: 49401
- name: test
num_bytes: 524214
num_examples: 2000
- name: validation
num_bytes: 514009
num_examples: 2000
download_size: 30282057
dataset_size: 13840047
- config_name: es
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 12808486
num_examples: 49401
- name: test
num_bytes: 519111
num_examples: 2000
- name: validation
num_bytes: 513888
num_examples: 2000
download_size: 30282057
dataset_size: 13841485
- config_name: fr
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 13295597
num_examples: 49401
- name: test
num_bytes: 535101
num_examples: 2000
- name: validation
num_bytes: 533031
num_examples: 2000
download_size: 30282057
dataset_size: 14363729
- config_name: ja
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 15041632
num_examples: 49401
- name: test
num_bytes: 668636
num_examples: 2000
- name: validation
num_bytes: 661778
num_examples: 2000
download_size: 30282057
dataset_size: 16372046
- config_name: ko
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 13934221
num_examples: 49401
- name: test
num_bytes: 562300
num_examples: 2000
- name: validation
num_bytes: 554875
num_examples: 2000
download_size: 30282057
dataset_size: 15051396
- config_name: zh
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 10815499
num_examples: 49401
- name: test
num_bytes: 474644
num_examples: 2000
- name: validation
num_bytes: 473118
num_examples: 2000
download_size: 30282057
dataset_size: 11763261
---
# Dataset Card for PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx)
- **Repository:** [PAWS-X](https://github.com/google-research-datasets/paws/tree/master/pawsx)
- **Paper:** [PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification](https://arxiv.org/abs/1908.11828)
- **Point of Contact:** [Yinfei Yang](yinfeiy@google.com)
### Dataset Summary
This dataset contains 23,659 **human** translated PAWS evaluation pairs and
296,406 **machine** translated training pairs in six typologically distinct
languages: French, Spanish, German, Chinese, Japanese, and Korean. All
translated pairs are sourced from examples in
[PAWS-Wiki](https://github.com/google-research-datasets/paws#paws-wiki).
For further details, see the accompanying paper:
[PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase
Identification](https://arxiv.org/abs/1908.11828)
### Supported Tasks and Leaderboards
The dataset has mainly been used for paraphrase identification in English and six other languages, namely French, Spanish, German, Chinese, Japanese, and Korean.
### Languages
The dataset is in English, French, Spanish, German, Chinese, Japanese, and Korean.
## Dataset Structure
### Data Instances
For en:
```
id : 1
sentence1 : In Paris , in October 1560 , he secretly met the English ambassador , Nicolas Throckmorton , asking him for a passport to return to England through Scotland .
sentence2 : In October 1560 , he secretly met with the English ambassador , Nicolas Throckmorton , in Paris , and asked him for a passport to return to Scotland through England .
label : 0
```
For fr:
```
id : 1
sentence1 : À Paris, en octobre 1560, il rencontra secrètement l'ambassadeur d'Angleterre, Nicolas Throckmorton, lui demandant un passeport pour retourner en Angleterre en passant par l'Écosse.
sentence2 : En octobre 1560, il rencontra secrètement l'ambassadeur d'Angleterre, Nicolas Throckmorton, à Paris, et lui demanda un passeport pour retourner en Écosse par l'Angleterre.
label : 0
```
### Data Fields
All files are in tsv format with four columns:
Column Name | Data
:---------- | :--------------------------------------------------------
id | An ID that matches the ID of the source pair in PAWS-Wiki
sentence1 | The first sentence
sentence2 | The second sentence
label | Label for each pair
The source text of each translation can be retrieved by looking up the ID in the
corresponding file in PAWS-Wiki.
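As an illustration of this ID lookup, the sketch below pairs a PAWS-X example with its English source pair through the Hugging Face `paws` dataset; it assumes the `paws` dataset preserves the PAWS-Wiki IDs.
```python
from datasets import load_dataset

fr = load_dataset("paws-x", "fr", split="validation")
en = load_dataset("paws", "labeled_final", split="validation")
en_by_id = {ex["id"]: ex for ex in en}

pair = fr[0]
source = en_by_id.get(pair["id"])  # English source pair, when the id is present
```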
### Data Splits
The numbers of examples for each of the seven languages are shown below:
Language | Train | Dev | Test
:------- | ------: | -----: | -----:
en | 49,401 | 2,000 | 2,000
fr | 49,401 | 2,000 | 2,000
es | 49,401 | 2,000 | 2,000
de | 49,401 | 2,000 | 2,000
zh | 49,401 | 2,000 | 2,000
ja | 49,401 | 2,000 | 2,000
ko | 49,401 | 2,000 | 2,000
> **Caveat**: please note that the dev and test sets of PAWS-X are both sourced
> from the dev set of PAWS-Wiki. As a consequence, the same `sentence 1` may
> appear in both the dev and test sets. Nevertheless our data split guarantees
> that there is no overlap on sentence pairs (`sentence 1` + `sentence 2`)
> between dev and test.
## Dataset Creation
### Curation Rationale
Most existing work on adversarial data generation focuses on English. For example, PAWS (Paraphrase Adversaries from Word Scrambling) (Zhang et al., 2019) consists of challenging English paraphrase identification pairs from Wikipedia and Quora. They remedy this gap with PAWS-X, a new dataset of 23,659 human translated PAWS evaluation pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean. They provide baseline numbers for three models with different capacity to capture non-local context and sentence structure, and using different multilingual training and evaluation regimes. Multilingual BERT (Devlin et al., 2019) fine-tuned on PAWS English plus machine-translated data performs the best, with a range of 83.1-90.8 accuracy across the non-English languages and an average accuracy gain of 23% over the next best model. PAWS-X shows the effectiveness of deep, multilingual pre-training while also leaving considerable headroom as a new challenge to drive multilingual research that better captures structure and contextual information.
### Source Data
PAWS (Paraphrase Adversaries from Word Scrambling)
#### Initial Data Collection and Normalization
All translated pairs are sourced from examples in [PAWS-Wiki](https://github.com/google-research-datasets/paws#paws-wiki)
#### Who are the source language producers?
This dataset contains 23,659 human translated PAWS evaluation pairs and 296,406 machine translated training pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
The paper credits the translation team, especially Mengmeng Niu, for their help with the annotations.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
### Citation Information
```
@InProceedings{pawsx2019emnlp,
title = {{PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification}},
author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason},
booktitle = {Proc. of EMNLP},
year = {2019}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik), [@gowtham1997](https://github.com/gowtham1997) for adding this dataset. |
paws | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
- text-scoring
- multi-input-text-classification
paperswithcode_id: paws
pretty_name: 'PAWS: Paraphrase Adversaries from Word Scrambling'
configs:
- labeled_final
- labeled_swap
- unlabeled_final
tags:
- paraphrase-identification
dataset_info:
- config_name: labeled_final
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 12239978
num_examples: 49401
- name: test
num_bytes: 1987802
num_examples: 8000
- name: validation
num_bytes: 1975870
num_examples: 8000
download_size: 4687157
dataset_size: 16203650
- config_name: labeled_swap
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 7963651
num_examples: 30397
download_size: 2257283
dataset_size: 7963651
- config_name: unlabeled_final
features:
- name: id
dtype: int32
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 157806996
num_examples: 645652
- name: validation
num_bytes: 2442173
num_examples: 10000
download_size: 47393331
dataset_size: 160249169
---
# Dataset Card for PAWS: Paraphrase Adversaries from Word Scrambling
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PAWS](https://github.com/google-research-datasets/paws)
- **Repository:** [PAWS](https://github.com/google-research-datasets/paws)
- **Paper:** [PAWS: Paraphrase Adversaries from Word Scrambling](https://arxiv.org/abs/1904.01130)
- **Point of Contact:** [Yuan Zhang](zhangyua@google.com)
### Dataset Summary
PAWS: Paraphrase Adversaries from Word Scrambling
This dataset contains 108,463 human-labeled and 656k noisily labeled pairs that highlight the importance of modeling structure, context, and word order information for the problem of paraphrase identification. The dataset has two subsets, one based on Wikipedia and the other based on the Quora Question Pairs (QQP) dataset.
For further details, see the accompanying paper: PAWS: Paraphrase Adversaries from Word Scrambling (https://arxiv.org/abs/1904.01130)
PAWS-QQP is not available due to the license of QQP. It must be reconstructed by downloading the original data and then running the authors' scripts to produce the data and attach the labels.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
Below are two examples from the dataset:
| | Sentence 1 | Sentence 2 | Label |
| :-- | :---------------------------- | :---------------------------- | :---- |
| (1) | Although interchangeable, the body pieces on the 2 cars are not similar. | Although similar, the body parts are not interchangeable on the 2 cars. | 0 |
| (2) | Katz was born in Sweden in 1947 and moved to New York City at the age of 1. | Katz was born in 1947 in Sweden and moved to New York at the age of one. | 1 |
The first pair has different semantic meaning while the second pair is a paraphrase. State-of-the-art models trained on existing datasets have dismal performance on PAWS (<40% accuracy); however, including PAWS training data for these models improves their accuracy to 85% while maintaining performance on existing datasets such as the [Quora Question Pairs](https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs).
### Data Fields
This corpus contains pairs generated from Wikipedia pages, and can be downloaded
here:
* **PAWS-Wiki Labeled (Final)**: containing pairs that are generated from both word swapping and back translation methods. All pairs have human judgements on both paraphrasing and fluency and they are split into Train/Dev/Test sections.
* **PAWS-Wiki Labeled (Swap-only)**: containing pairs that have no back translation counterparts and therefore they are not included in the first set. Nevertheless, they are high-quality pairs with human judgements on both paraphrasing and fluency, and they can be included as an auxiliary training set.
* **PAWS-Wiki Unlabeled (Final)**: Pairs in this set have noisy labels without human judgments and can also be used as an auxiliary training set. They are generated from both word swapping and back translation methods.
All files are in the tsv format with four columns:
Column Name | Data
:------------ | :--------------------------
id | A unique id for each pair
sentence1 | The first sentence
sentence2 | The second sentence
(noisy_)label | (Noisy) label for each pair
Each label has two possible values: `0` indicates the pair has different meaning, while `1` indicates the pair is a paraphrase.
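Each subset corresponds to one of the configs declared in the YAML header; a minimal loading sketch:
```python
from datasets import load_dataset

labeled = load_dataset("paws", "labeled_final")             # train/validation/test
swap = load_dataset("paws", "labeled_swap", split="train")  # train split only
unlabeled = load_dataset("paws", "unlabeled_final")         # noisy labels, train/validation
print(labeled["train"][0]["sentence1"])
```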
### Data Splits
The number of examples and the proportion of paraphrase (Yes%) pairs are shown
below:
Data | Train | Dev | Test | Yes%
:------------------ | ------: | -----: | ----: | ----:
Labeled (Final) | 49,401 | 8,000 | 8,000 | 44.2%
Labeled (Swap-only) | 30,397 | -- | -- | 9.6%
Unlabeled (Final) | 645,652 | 10,000 | -- | 50.0%
## Dataset Creation
### Curation Rationale
Existing paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. Models trained on such data fail to distinguish pairs like *flights from New York to Florida* and *flights from Florida to New York*.
### Source Data
#### Initial Data Collection and Normalization
Their automatic generation method is based on two ideas. The first swaps words to generate a sentence pair with the same BOW, controlled by a language model. The second uses back translation to generate paraphrases with high BOW overlap but different word order. These two strategies generate high-quality, diverse PAWS pairs, balanced evenly between paraphrases and non-paraphrases.
#### Who are the source language producers?
Mentioned above.
### Annotations
#### Annotation process
Sentence pairs are presented to five annotators, each of whom gives a binary judgment as to whether they are paraphrases or not. Binary judgments were chosen so that the dataset has the same label schema as the QQP corpus. Overall, human agreement is high on both Quora (92.0%) and Wikipedia (94.7%), and each label takes only about 24 seconds, so answers are usually straightforward for human raters.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of Google LLC ("Google") as the data source would be appreciated. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.
### Citation Information
```
@InProceedings{paws2019naacl,
title = {{PAWS: Paraphrase Adversaries from Word Scrambling}},
author = {Zhang, Yuan and Baldridge, Jason and He, Luheng},
booktitle = {Proc. of NAACL},
year = {2019}
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. |
pec | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- gpl-3.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- text-retrieval
task_ids:
- dialogue-modeling
- utterance-retrieval
paperswithcode_id: pec
pretty_name: Persona-Based Empathetic Conversational
configs:
- all
- happy
- offmychest
dataset_info:
- config_name: happy
features:
- name: personas
sequence: string
- name: context
sequence: string
- name: context_speakers
sequence: string
- name: response
dtype: string
- name: response_speaker
dtype: string
splits:
- name: train
num_bytes: 643196978
num_examples: 157195
- name: test
num_bytes: 92003042
num_examples: 22730
- name: validation
num_bytes: 81132088
num_examples: 19829
download_size: 252434681
dataset_size: 816332108
- config_name: offmychest
features:
- name: personas
sequence: string
- name: context
sequence: string
- name: context_speakers
sequence: string
- name: response
dtype: string
- name: response_speaker
dtype: string
splits:
- name: train
num_bytes: 518616402
num_examples: 123968
- name: test
num_bytes: 64173390
num_examples: 15324
- name: validation
num_bytes: 66675909
num_examples: 16004
download_size: 252434681
dataset_size: 649465701
- config_name: all
features:
- name: personas
sequence: string
- name: context
sequence: string
- name: context_speakers
sequence: string
- name: response
dtype: string
- name: response_speaker
dtype: string
splits:
- name: train
num_bytes: 1162655628
num_examples: 281163
- name: test
num_bytes: 156310498
num_examples: 38054
- name: validation
num_bytes: 147940164
num_examples: 35833
download_size: 252434681
dataset_size: 1466906290
---
# Dataset Card for PEC
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [PEC repository](https://github.com/zhongpeixiang/PEC)
- **Paper:** [Towards Persona-Based Empathetic Conversational Models](https://www.aclweb.org/anthology/2020.emnlp-main.531/)
- **Point of Contact:** [Peixiang Zhong](mailto:zhongpeixiang@gmail.com)
### Dataset Summary
The PEC dataset is an English-language dataset of open-domain conversations gathered from two subreddits on Reddit, i.e., happy and offmychest. PEC has around 350K persona-based empathetic conversations. Each utterance is associated with a speaker, and each speaker has a persona of multiple persona sentences. The conversations in PEC are more empathetic than casual conversations. The conversations in the happy domain are mostly positive, whereas the conversations in the offmychest domain are mostly negative.
### Supported Tasks and Leaderboards
- `dialogue-modeling`, `utterance-retrieval`: this dataset can be used to train a generative or retrieval-based conversational model.
### Languages
English
## Dataset Structure
### Data Instances
A typical data example comprises a list of context utterances, a list of context speakers, a response to the context, the response speaker and the persona of the response speaker.
An example from PEC looks as follows:
```
{'context': ['found out this morning i got a job promotion ! ! !'],
'context_speakers': ['HeWentToJared91'],
'personas': [
"i ca n't stand working in the ugli .",
'i ’ve always liked my eyes except for the fact that they ca n’t shoot lasers',
'i feel really bad about myself as a person right now , and i could really use a hand .',
'i drank a coffee , and it just made me feel even more exhausted .',
'i want a natsuki t shirt',
"i 've dealt with depression in the past .",
'i love red dead 2'],
'response': "you look like a nice person ! we 're proud of you , and i bet you earned that promotion !",
'response_speaker': 'tylock'}
```
### Data Fields
- `context`: a list of strings, each string denotes a context utterance.
- `context_speakers`: a list of strings, each string denotes a speaker.
- `response`: a string denoting the response to the `context`.
- `response_speaker`: a string denoting the speaker of `response`.
- `personas`: a list of strings, each string denotes a persona sentence of `response_speaker`.
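For example, a conversation can be replayed by zipping speakers with utterances (config names as in the YAML header):
```python
from datasets import load_dataset

happy = load_dataset("pec", "happy", split="train")
ex = happy[0]
for speaker, utterance in zip(ex["context_speakers"], ex["context"]):
    print(f"{speaker}: {utterance}")
print(f"{ex['response_speaker']}: {ex['response']}")
```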
### Data Splits
The data is split into a training, validation and test set for each of the three domains. Note that the *all* domain is the concatenation of the *happy* and *offmychest* domains.
| domain | train | validation | test |
|------------|-------:|-----------:|------:|
| happy | 157195 | 19829 | 22730 |
| offmychest | 123968 | 16004 | 15324 |
| all | 281163 | 35833 | 38054 |
## Dataset Creation
### Curation Rationale
PEC was built to provide a testbed for machines to learn persona-based empathetic responding. In our empirical analysis, we found that different personas have different styles of empathetic responding. This dataset can also be used to investigate the link between persona and empathy in human conversations. According to our human assessment, the conversations on the happy and offmychest subreddits are significantly more empathetic than casual conversations.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained via the [pushshift API](https://pushshift.io/using-bigquery-with-reddit-data/) via Google BigQuery.
#### Who are the source language producers?
The language producers are users of the [r/happy](https://www.reddit.com/r/happy/), and [r/offmychest](https://www.reddit.com/r/offmychest/) subreddits between 2012 and 2020. No further demographic information was available from the data source.
### Annotations
#### Annotation process
The dataset does not contain any additional annotations.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset includes the speaker IDs of users on *happy* and *offmychest* subreddits.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop more personalised and empathetic conversational systems, which is an important milestone towards truly human-like conversational agents.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
A small portion of the dataset has the issues of sexism, hate, and harassment. The persona sentences are noisy.
## Additional Information
### Dataset Curators
The dataset was initially created by Peixiang Zhong, Chen Zhang, Hao Wang, Yong Liu, and Chunyan Miao, jointly done at Nanyang Technological University and Alibaba Group.
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data which is unclear.
### Citation Information
```
@inproceedings{zhong-etal-2020-towards,
title = "Towards Persona-Based Empathetic Conversational Models",
author = "Zhong, Peixiang and
Zhang, Chen and
Wang, Hao and
Liu, Yong and
Miao, Chunyan",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
year = "2020",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.531",
pages = "6556--6566"
}
```
### Contributions
Thanks to [@zhongpeixiang](https://github.com/zhongpeixiang) for adding this dataset. |
allenai/peer_read | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids: []
paperswithcode_id: peerread
pretty_name: PeerRead
tags:
- acceptability-classification
dataset_info:
- config_name: parsed_pdfs
features:
- name: name
dtype: string
- name: metadata
struct:
- name: source
dtype: string
- name: title
dtype: string
- name: authors
sequence: string
- name: emails
sequence: string
- name: sections
sequence:
- name: heading
dtype: string
- name: text
dtype: string
- name: references
sequence:
- name: title
dtype: string
- name: author
sequence: string
- name: venue
dtype: string
- name: citeRegEx
dtype: string
- name: shortCiteRegEx
dtype: string
- name: year
dtype: int32
- name: referenceMentions
sequence:
- name: referenceID
dtype: int32
- name: context
dtype: string
- name: startOffset
dtype: int32
- name: endOffset
dtype: int32
- name: year
dtype: int32
- name: abstractText
dtype: string
- name: creator
dtype: string
splits:
- name: train
num_bytes: 571263679
num_examples: 11090
- name: test
num_bytes: 34284777
num_examples: 637
- name: validation
num_bytes: 32488519
num_examples: 637
download_size: 1246688292
dataset_size: 638036975
- config_name: reviews
features:
- name: id
dtype: string
- name: conference
dtype: string
- name: comments
dtype: string
- name: subjects
dtype: string
- name: version
dtype: string
- name: date_of_submission
dtype: string
- name: title
dtype: string
- name: authors
sequence: string
- name: accepted
dtype: bool
- name: abstract
dtype: string
- name: histories
sequence:
sequence: string
- name: reviews
sequence:
- name: date
dtype: string
- name: title
dtype: string
- name: other_keys
dtype: string
- name: originality
dtype: string
- name: comments
dtype: string
- name: is_meta_review
dtype: bool
- name: is_annotated
dtype: bool
- name: recommendation
dtype: string
- name: replicability
dtype: string
- name: presentation_format
dtype: string
- name: clarity
dtype: string
- name: meaningful_comparison
dtype: string
- name: substance
dtype: string
- name: reviewer_confidence
dtype: string
- name: soundness_correctness
dtype: string
- name: appropriateness
dtype: string
- name: impact
dtype: string
splits:
- name: train
num_bytes: 15234922
num_examples: 11090
- name: test
num_bytes: 878906
num_examples: 637
- name: validation
num_bytes: 864799
num_examples: 637
download_size: 1246688292
dataset_size: 16978627
---
# Dataset Card for peer_read
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://arxiv.org/abs/1804.09635
- **Repository:** https://github.com/allenai/PeerRead
- **Paper:** https://arxiv.org/pdf/1804.09635.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
PeerRead is a dataset of scientific peer reviews available to help researchers study this important artifact. The dataset consists of over 14K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR, as well as over 10K textual peer reviews written by experts for a subset of the papers.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
#### parsed_pdfs
- `name`: `string` Filename in the dataset
- `metadata`: `dict` Paper metadata
- `source`: `string` Paper source
- `authors`: `list<string>` List of paper authors
- `title`: `string` Paper title
- `sections`: `list<dict>` List of section heading and corresponding description
- `heading`: `string` Section heading
- `text`: `string` Section description
- `references`: `list<dict>` List of references
- `title`: `string` Title of reference paper
- `author`: `list<string>` List of reference paper authors
- `venue`: `string` Reference venue
- `citeRegEx`: `string` Reference citeRegEx
- `shortCiteRegEx`: `string` Reference shortCiteRegEx
- `year`: `int` Reference publish year
- `referenceMentions`: `list<string>` List of reference mentions
- `referenceID`: `int` Reference mention ID
- `context`: `string` Reference mention context
- `startOffset`: `int` Reference startOffset
- `endOffset`: `int` Reference endOffset
- `year`: `int` Paper publish year
- `abstractText`: `string` Paper abstract
- `creator`: `string` Paper creator
#### reviews
- `id`: `string` Review ID
- `conference`: `string` Conference name
- `comments`: `string` Review comments
- `subjects`: `string` Review subjects
- `version`: `string` Review version
- `date_of_submission`: `string` Submission date
- `title`: `string` Paper title
- `authors`: `list<string>` List of paper authors
- `accepted`: `bool` Paper accepted flag
- `abstract`: `string` Paper abstract
- `histories`: `list<list<string>>` Paper details with link
- `reviews`: `list<dict>` Paper reviews
- `date`: `string` Date of review
- `title`: `string` Paper title
- `other_keys`: `string` Reviewer other details
- `originality`: `string` Originality score
- `comments`: `string` Reviewer comments
- `is_meta_review`: `bool` Review type flag
- `recommendation`: `string` Reviewer recommendation
- `replicability`: `string` Replicability score
- `presentation_format`: `string` Presentation type
- `clarity`: `string` Clarity score
- `meaningful_comparison`: `string` Meaningful comparison score
- `substance`: `string` Substance score
- `reviewer_confidence`: `string` Reviewer confidence score
- `soundness_correctness`: `string` Soundness correctness score
- `appropriateness`: `string` Appropriateness score
- `impact`: `string` Impact score
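Both configurations can be loaded by name. A minimal loading sketch, assuming the Hugging Face `datasets` library (note that nested sequences of named fields, such as `reviews`, come back as a dict of lists):
```python
from datasets import load_dataset

# Load the `reviews` config described above (the `parsed_pdfs` config
# works the same way).
reviews = load_dataset("allenai/peer_read", "reviews", split="train")

example = reviews[0]
print(example["title"], "accepted:", example["accepted"])

# The nested `reviews` field is returned as a dict of lists, one list
# per review field:
revs = example["reviews"]
for rec, comments in zip(revs["recommendation"], revs["comments"]):
    print(rec, (comments or "")[:80])
```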
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Dongyeop Kang, Waleed Ammar, Bhavana Dalvi Mishra, Madeleine van Zuylen, Sebastian Kohlmeier, Eduard Hovy, Roy Schwartz
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{kang18naacl,
  title = {A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications},
  author = {Dongyeop Kang and Waleed Ammar and Bhavana Dalvi and Madeleine van Zuylen and Sebastian Kohlmeier and Eduard Hovy and Roy Schwartz},
  booktitle = {Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL)},
  address = {New Orleans, USA},
  month = {June},
  url = {https://arxiv.org/abs/1804.09635},
  year = {2018}
}
```
### Contributions
Thanks to [@vinaykudari](https://github.com/vinaykudari) for adding this dataset. |
peoples_daily_ner | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- zh
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: People's Daily NER
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
config_name: peoples_daily_ner
splits:
- name: train
num_bytes: 14972456
num_examples: 20865
- name: validation
num_bytes: 1676741
num_examples: 2319
- name: test
num_bytes: 3346975
num_examples: 4637
download_size: 8385672
dataset_size: 19996172
---
# Dataset Card for People's Daily NER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/OYE93/Chinese-NLP-Corpus/tree/master/NER/People's%20Daily)
- **Repository:** [Github](https://github.com/OYE93/Chinese-NLP-Corpus/)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
No citation available for this dataset.
### Contributions
Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset. |
per_sent | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-MPQA-KBP Challenge-MediaRank
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: persent
pretty_name: PerSenT
dataset_info:
features:
- name: DOCUMENT_INDEX
dtype: int64
- name: TITLE
dtype: string
- name: TARGET_ENTITY
dtype: string
- name: DOCUMENT
dtype: string
- name: MASKED_DOCUMENT
dtype: string
- name: TRUE_SENTIMENT
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph0
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph1
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph2
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph3
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph4
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph5
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph6
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph7
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph8
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph9
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph10
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph11
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph12
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph13
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph14
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
- name: Paragraph15
dtype:
class_label:
names:
'0': Negative
'1': Neutral
'2': Positive
splits:
- name: train
num_bytes: 14595163
num_examples: 3355
- name: test_random
num_bytes: 2629500
num_examples: 579
- name: test_fixed
num_bytes: 3881800
num_examples: 827
- name: validation
num_bytes: 2322922
num_examples: 578
download_size: 23117196
dataset_size: 23429385
---
# Dataset Card for PerSenT
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PerSenT](https://stonybrooknlp.github.io/PerSenT/)
- **Repository:** [https://github.com/MHDBST/PerSenT](https://github.com/MHDBST/PerSenT)
- **Paper:** [arXiv](https://arxiv.org/abs/2011.06128)
- **Leaderboard:** NA
- **Point of Contact:** [Mohaddeseh Bastan](mbastan@cs.stonybrook.edu)
### Dataset Summary
PerSenT is a crowd-sourced dataset that captures the sentiment of an author towards the main entity in a news article. This dataset contains annotations for 5.3k documents and 38k paragraphs covering 3.2k unique entities. For each article, annotators judge what the author’s sentiment is towards the main
(target) entity of the article. The annotations also include similar judgments on paragraphs within the article.
### Supported Tasks and Leaderboards
Sentiment Classification: Each document consists of multiple paragraphs. Each paragraph is labeled separately (Positive, Neutral, Negative) and the author’s sentiment towards the whole document is included as a document-level label.
### Languages
English
## Dataset Structure
### Data Instances
```json
{'DOCUMENT': "Germany's Landesbank Baden Wuertemberg won EU approval Tuesday for a state bailout after it promised to shrink its balance sheet by 40 percent and refocus on lending to companies.\n The bank was several state-owned German institutions to run into trouble last year after it ran up more huge losses from investing in high-risk proprietary trading and capital market activities -- a business the EU has now told it to shun.\n Seven current and former managers of the bank are also being investigated by German authorities for risking or damaging the bank's capital by carrying out or failing to block investments in high-risk deals worth hundreds of millions from 2006.\n The European Commission said its Tuesday approval for the state rescue of the bank and its new restructuring plan would allow it become a viable business again -- and that the cutbacks would help limit the unfair advantage over rivals that the bank would get from the state aid.\n Stuttgart-based LBBW earlier this year received a capital injection of (EURO)5 billion from the bank's shareholders all of them public authorities or state-owned including the state of Baden-Wuerttemberg the region's savings bank association and the city of Stuttgart.",
'DOCUMENT_INDEX': 1,
'MASKED_DOCUMENT': "[TGT] won EU approval Tuesday for a state bailout after it promised to shrink its balance sheet by 40 percent and refocus on lending to companies.\n [TGT] was several state-owned German institutions to run into trouble last year after [TGT] ran up more huge losses from investing in high-risk proprietary trading and capital market activities -- a business the EU has now told it to shun.\n Seven current and former managers of [TGT] are also being investigated by German authorities for risking or damaging [TGT]'s capital by carrying out or failing to block investments in high-risk deals worth hundreds of millions from 2006.\n The European Commission said its Tuesday approval for the state rescue of [TGT] and its new restructuring plan would allow it become a viable business again -- and that the cutbacks would help limit the unfair advantage over rivals that [TGT] would get from the state aid.\n Stuttgart-based LBBW earlier this year received a capital injection of (EURO)5 billion from [TGT]'s shareholders all of them public authorities or state-owned including the state of Baden-Wuerttemberg the region's savings bank association and the city of Stuttgart.",
'Paragraph0': 2,
'Paragraph1': 0,
'Paragraph10': -1,
'Paragraph11': -1,
'Paragraph12': -1,
'Paragraph13': -1,
'Paragraph14': -1,
'Paragraph15': -1,
'Paragraph2': 0,
'Paragraph3': 1,
'Paragraph4': 1,
'Paragraph5': -1,
'Paragraph6': -1,
'Paragraph7': -1,
'Paragraph8': -1,
'Paragraph9': -1,
'TARGET_ENTITY': 'Landesbank Baden Wuertemberg',
'TITLE': 'German bank LBBW wins EU bailout approval',
'TRUE_SENTIMENT': 0}
```
### Data Fields
- DOCUMENT_INDEX: ID of the document per original dataset
- TITLE: Title of the article
- DOCUMENT: Text of the article
- MASKED_DOCUMENT: Text of the article with the target entity masked with `[TGT]` token
- TARGET_ENTITY: The entity that the author is expressing opinion about
- TRUE_SENTIMENT: Label for entire article
- Paragraph{0..15}: Label for each paragraph in the article
**Note**: Labels are one of `[Negative, Neutral, Positive]`. Missing labels were replaced with `-1`.
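The sentiment fields are class labels, so the integer values in the example above can be decoded back to label names. A minimal sketch, assuming the Hugging Face `datasets` library (the `-1` handling follows the note above):
```python
from datasets import load_dataset

ds = load_dataset("per_sent", split="train")
label_names = ds.features["TRUE_SENTIMENT"].names  # ['Negative', 'Neutral', 'Positive']

example = ds[0]
print(example["TARGET_ENTITY"], "->", label_names[example["TRUE_SENTIMENT"]])

# Paragraph labels use -1 for paragraphs that do not exist in the
# article, so skip those before decoding.
for i in range(16):
    label = example[f"Paragraph{i}"]
    if label != -1:
        print(f"Paragraph{i}: {label_names[label]}")
```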
### Data Splits
To split the dataset, entities were divided into 4 mutually exclusive sets. Due to the nature of news collections, some entities tend to dominate the collection: four entities were the main entity in nearly 800 articles. To prevent these entities from dominating the train or test splits, they were moved to a separate test collection. The remaining articles were split into training, dev, and test sets at random. Thus the collection includes one standard test set consisting of articles drawn at random (Test Standard), as well as a test set containing multiple articles about a small number of popular entities (Test Frequent).
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Articles were selected from 3 sources:
1. MPQA (Deng and Wiebe, 2015; Wiebe et al., 2005): This dataset contains news articles manually annotated for opinions, beliefs, emotions, sentiments, speculations, etc. It also has target annotations which are entities and event anchored to the heads of noun or verb phrases. All decisions on this dataset are made on sentence-level and over short spans.
2. KBP Challenge (Ellis et al., 2014): This resource contains the TAC 2014 KBP English sentiment slot filling challenge dataset, a document-level sentiment slot filling dataset. In this task, given an entity and a sentiment (positive/negative) from the document, the goal is to find entities toward which the original entity holds the given sentiment. We selected documents from this resource which had been used in similar prior work on sentiment analysis (Choi et al., 2016).
3. Media Rank (Ye and Skiena, 2019): This dataset ranks about 50k news sources along different aspects. It is also used for classifying political ideology of news articles (Kulkarni et al., 2018).
Pre-processing steps:
- First, we found all the person entities in each article using the Stanford NER (Named Entity Recognition) tagger (Finkel et al., 2005), and all mentions of them using co-reference resolution (Clark and Manning, 2016; Co, 2017).
- We removed articles unlikely to have a main entity of focus, using a simple heuristic: we dropped articles in which the most frequent person entity is mentioned three times or fewer (even when counting co-referent mentions).
- For the remaining articles, we deemed the most frequent entity to be the main entity of the article. We also filtered out extremely long and extremely short articles, keeping only articles with at least 3 and at most 16 paragraphs.
Documents are randomly separated into train, dev, and two test sets. We ensure that each entity appears in only one of the sets; our goal is to avoid easy-to-learn biases over entities. To prevent the most frequent entities from dominating the training or test sets, we removed articles covering the most frequent entities and use them as a separate test set (referred to as the frequent test set) in addition to the randomly drawn standard test set.
### Annotations
#### Annotation process
We obtained document- and paragraph-level annotations with the help of Amazon Mechanical Turk workers. The workers first verified that the target entity we provided was indeed the main entity in the document. Then, they rated each paragraph in the document that contained a direct mention of, or a reference to, the target entity. Last, they rated the sentiment towards the entity based on the entire document. In both cases, the workers assessed the author's view based on what the author said about the target entity. For both paragraph- and document-level sentiment, the workers chose from five rating categories: Negative, Slightly Negative, Neutral, Slightly Positive, or Positive. We then combined the fine-grained annotations into three coarse-grained classes: Negative, Neutral, or Positive.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
[More Information Needed]
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{bastan2020authors,
title={Author's Sentiment Prediction},
author={Mohaddeseh Bastan and Mahnaz Koupaee and Youngseo Son and Richard Sicoli and Niranjan Balasubramanian},
year={2020},
eprint={2011.06128},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset. |
persian_ner | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- fa
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Persian NER
dataset_info:
- config_name: fold1
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': I-event
'2': I-fac
'3': I-loc
'4': I-org
'5': I-pers
'6': I-pro
'7': B-event
'8': B-fac
'9': B-loc
'10': B-org
'11': B-pers
'12': B-pro
splits:
- name: train
num_bytes: 3362102
num_examples: 5121
- name: test
num_bytes: 1646481
num_examples: 2560
download_size: 1931170
dataset_size: 5008583
- config_name: fold2
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': I-event
'2': I-fac
'3': I-loc
'4': I-org
'5': I-pers
'6': I-pro
'7': B-event
'8': B-fac
'9': B-loc
'10': B-org
'11': B-pers
'12': B-pro
splits:
- name: train
num_bytes: 3344561
num_examples: 5120
- name: test
num_bytes: 1664022
num_examples: 2561
download_size: 1931170
dataset_size: 5008583
- config_name: fold3
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': I-event
'2': I-fac
'3': I-loc
'4': I-org
'5': I-pers
'6': I-pro
'7': B-event
'8': B-fac
'9': B-loc
'10': B-org
'11': B-pers
'12': B-pro
splits:
- name: train
num_bytes: 3310491
num_examples: 5121
- name: test
num_bytes: 1698092
num_examples: 2560
download_size: 1931170
dataset_size: 5008583
---
# Dataset Card for Persian NER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/HaniehP/PersianNER)
- **Repository:** [Github](https://github.com/HaniehP/PersianNER)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/C16-1319)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset includes 7,682 Persian sentences, split into 250,015 tokens and their NER labels. It is available in 3 folds to be used in turn as training and test sets. The NER tags are in IOB format.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"O", "I-event", "I-fac", "I-loc", "I-org", "I-pers", "I-pro", "B-event", "B-fac", "B-loc", "B-org", "B-pers", "B-pro"
```
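A minimal loading sketch, assuming the Hugging Face `datasets` library; `fold1` is one of the three fold configs:
```python
from datasets import load_dataset

ds = load_dataset("persian_ner", "fold1", split="train")
# `ner_tags` is a sequence of class labels; recover the tag names above:
tag_names = ds.features["ner_tags"].feature.names

example = ds[0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(token, tag_names[tag])
```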
### Data Splits
Training and test splits
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Hanieh Poostchi, Ehsan Zare Borzeshi, Mohammad Abdous, Massimo Piccardi
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
Hanieh Poostchi, Ehsan Zare Borzeshi, Mohammad Abdous, Massimo Piccardi
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
The dataset is published for academic use only.
### Dataset Curators
[More Information Needed]
### Licensing Information
Creative Commons Attribution 4.0 International License.
### Citation Information
```
@inproceedings{poostchi-etal-2016-personer,
    title = "{P}erso{NER}: {P}ersian Named-Entity Recognition",
    author = "Poostchi, Hanieh and
      Zare Borzeshi, Ehsan and
      Abdous, Mohammad and
      Piccardi, Massimo",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://www.aclweb.org/anthology/C16-1319",
    pages = "3381--3389",
    abstract = "Named-Entity Recognition (NER) is still a challenging task for languages with low digital resources. The main difficulties arise from the scarcity of annotated corpora and the consequent problematic training of an effective NER pipeline. To abridge this gap, in this paper we target the Persian language that is spoken by a population of over a hundred million people world-wide. We first present and provide ArmanPerosNERCorpus, the first manually-annotated Persian NER corpus. Then, we introduce PersoNER, an NER pipeline for Persian that leverages a word embedding and a sequential max-margin classifier. The experimental results show that the proposed approach is capable of achieving interesting MUC7 and CoNNL scores while outperforming two alternatives based on a CRF and a recurrent neural network.",
}
```
### Contributions
Thanks to [@KMFODA](https://github.com/KMFODA) for adding this dataset. |
pg19 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
paperswithcode_id: pg-19
pretty_name: PG-19
dataset_info:
features:
- name: short_book_title
dtype: string
- name: publication_date
dtype: int32
- name: url
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 11453688524
num_examples: 28602
- name: validation
num_bytes: 17402307
num_examples: 50
- name: test
num_bytes: 40482864
num_examples: 100
download_size: 11740484131
dataset_size: 11511573695
---
# Dataset Card for "pg19"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/deepmind/pg19](https://github.com/deepmind/pg19)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Compressive Transformers for Long-Range Sequence Modelling](https://arxiv.org/abs/1911.05507)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 11.74 GB
- **Size of the generated dataset:** 11.51 GB
- **Total amount of disk used:** 23.25 GB
### Dataset Summary
This repository contains the PG-19 language modeling benchmark.
It includes a set of books extracted from the Project Gutenberg books library that were published before 1919.
It also contains metadata of book titles and publication dates.
PG-19 is over double the size of the Billion Word benchmark and contains documents that are 20X longer, on average, than the WikiText long-range language modelling benchmark.
Books are partitioned into a train, validation, and test set. Book metadata is stored in metadata.csv which contains (book_id, short_book_title, publication_date).
Unlike prior benchmarks, we do not constrain the vocabulary size --- i.e. mapping rare words to an UNK token --- but instead release the data as an open-vocabulary benchmark. The only processing of the text that has been applied is the removal of boilerplate license text, and the mapping of offensive discriminatory words as specified by Ofcom to placeholder tokens. Users are free to model the data at the character-level, subword-level, or via any mechanism that can model an arbitrary string of text.
To compare models, we propose to continue measuring the word-level perplexity by calculating the total likelihood of the dataset (via any chosen subword vocabulary or character-based scheme) divided by the number of tokens --- specified below in the dataset statistics table.
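As a rough illustration of that normalisation, here is a sketch (the function and the example numbers are ours, not part of the benchmark; the official per-split word counts live in the repository's statistics table):
```python
import math

def word_level_perplexity(total_log_likelihood: float, num_words: int) -> float:
    """Word-level perplexity from a model's total log-likelihood of a split
    (natural log, summed over whatever subword or character units the model
    predicts), normalised by the split's word-level token count."""
    return math.exp(-total_log_likelihood / num_words)

# E.g. a model assigning a total log-likelihood of -2,000,000 nats to a
# 1,000,000-word split scores exp(2.0) ~= 7.39.
print(word_level_perplexity(-2_000_000, 1_000_000))
```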
One could use this dataset for benchmarking long-range language models, or use it to pre-train for other natural language processing tasks which require long-range reasoning, such as LAMBADA or NarrativeQA. We would not recommend using this dataset to train a general-purpose language model, e.g. for applications to a production-system dialogue agent, due to the dated linguistic style of old texts and the inherent biases present in historical writing.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 11.74 GB
- **Size of the generated dataset:** 11.51 GB
- **Total amount of disk used:** 23.25 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"publication_date": 1907,
"short_book_title": "La Fiammetta by Giovanni Boccaccio",
"text": "\"\\n\\n\\n\\nProduced by Ted Garvin, Dave Morgan and PG Distributed Proofreaders\\n\\n\\n\\n\\nLA FIAMMETTA\\n\\nBY\\n\\nGIOVANNI BOCCACCIO\\n...",
"url": "http://www.gutenberg.org/ebooks/10006"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `short_book_title`: a `string` feature.
- `publication_date`: a `int32` feature.
- `url`: a `string` feature.
- `text`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|28602| 50| 100|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).
### Citation Information
```
@article{raecompressive2019,
author = {Rae, Jack W and Potapenko, Anna and Jayakumar, Siddhant M and
Hillier, Chloe and Lillicrap, Timothy P},
title = {Compressive Transformers for Long-Range Sequence Modelling},
journal = {arXiv preprint},
url = {https://arxiv.org/abs/1911.05507},
year = {2019},
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@lucidrains](https://github.com/lucidrains), [@lhoestq](https://github.com/lhoestq) for adding this dataset. |
php | ---
annotations_creators:
- found
language_creators:
- found
language:
- cs
- de
- en
- es
- fi
- fr
- he
- hu
- it
- ja
- ko
- nl
- pl
- pt
- ro
- ru
- sk
- sl
- sv
- tr
- tw
- zh
language_bcp47:
- pt-BR
- zh-TW
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: php
dataset_info:
- config_name: fi-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- fi
- nl
splits:
- name: train
num_bytes: 1197502
num_examples: 27870
download_size: 43228
dataset_size: 1197502
- config_name: it-ro
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- it
- ro
splits:
- name: train
num_bytes: 1422966
num_examples: 28507
download_size: 108885
dataset_size: 1422966
- config_name: nl-sv
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- nl
- sv
splits:
- name: train
num_bytes: 1298041
num_examples: 28079
download_size: 58495
dataset_size: 1298041
- config_name: en-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 2758463
num_examples: 35538
download_size: 478646
dataset_size: 2758463
- config_name: en-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 4288513
num_examples: 42222
download_size: 905396
dataset_size: 4288513
---
# Dataset Card for php
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/PHP.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
To load a language pair which isn't part of the predefined configs, specify the two language codes as a pair.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/PHP.php
E.g.
`dataset = load_dataset("php", lang1="it", lang2="pl")`
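Each loaded example is a dictionary with an `id` and a `translation` dict keyed by language code, as in this sketch (same assumptions as above):
```python
from datasets import load_dataset

dataset = load_dataset("php", lang1="it", lang2="pl")

example = dataset["train"][0]
# The `translation` field maps each language code to its sentence:
print(example["translation"]["it"])
print(example["translation"]["pl"])
```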
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
etalab-ia/piaf | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- fr
language_bcp47:
- fr-FR
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: null
pretty_name: Piaf
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
config_name: plain_text
splits:
- name: train
num_bytes: 3332905
num_examples: 3835
download_size: 1370384
dataset_size: 3332905
---
# Dataset Card for Piaf
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://piaf.etalab.studio](https://piaf.etalab.studio)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.31 MB
- **Size of the generated dataset:** 3.18 MB
- **Total amount of disk used:** 4.49 MB
### Dataset Summary
Piaf is a reading comprehension dataset. This version, published in February 2020, contains 3835 questions on French Wikipedia.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 1.31 MB
- **Size of the generated dataset:** 3.18 MB
- **Total amount of disk used:** 4.49 MB
An example of 'train' looks as follows.
```
{
"answers": {
"answer_start": [0],
"text": ["Voici"]
},
"context": "Voici le contexte du premier paragraphe du deuxième article.",
"id": "p140295460356960",
"question": "Suis-je la troisième question ?",
"title": "Jakob Böhme"
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
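A minimal sketch of reading the SQuAD-style `answers` field, assuming the Hugging Face `datasets` library:
```python
from datasets import load_dataset

ds = load_dataset("etalab-ia/piaf", split="train")

example = ds[0]
answer = example["answers"]["text"][0]
start = example["answers"]["answer_start"][0]
# `answer_start` is a character offset into `context`:
assert example["context"][start:start + len(answer)] == answer
print(example["question"], "->", answer)
```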
### Data Splits
| name | train |
|------------|------:|
| plain_text | 3835 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{keraron-EtAl:2020:LREC,
author = {Keraron, Rachel and Lancrenon, Guillaume and Bras, Mathilde and Allary, Frédéric and Moyse, Gilles and Scialom, Thomas and Soriano-Morales, Edmundo-Pavel and Staiano, Jacopo},
title = {Project PIAF: Building a Native French Question-Answering Dataset},
booktitle = {Proceedings of The 12th Language Resources and Evaluation Conference},
month = {May},
year = {2020},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {5483--5492},
abstract = {Motivated by the lack of data for non-English languages, in particular for the evaluation of downstream tasks such as Question Answering, we present a participatory effort to collect a native French Question Answering Dataset. Furthermore, we describe and publicly release the annotation tool developed for our collection effort, along with the data obtained and preliminary baselines.},
url = {https://www.aclweb.org/anthology/2020.lrec-1.673}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@RachelKer](https://github.com/RachelKer) for adding this dataset. |
pib | ---
task_categories:
- translation
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
multilinguality:
- translation
language:
- bn
- en
- gu
- hi
- ml
- mr
- or
- pa
- ta
- te
- ur
language_creators:
- other
annotations_creators:
- no-annotation
source_datasets:
- original
size_categories:
- 100K<n<1M
- 10K<n<100K
license:
- cc-by-4.0
paperswithcode_id: null
pretty_name: CVIT PIB
configs:
- bn-en
- bn-gu
- bn-hi
- bn-ml
- bn-mr
- bn-or
- bn-pa
- bn-ta
- bn-te
- bn-ur
- en-gu
- en-hi
- en-ml
- en-mr
- en-or
- en-pa
- en-ta
- en-te
- en-ur
- gu-hi
- gu-ml
- gu-mr
- gu-or
- gu-pa
- gu-ta
- gu-te
- gu-ur
- hi-ml
- hi-mr
- hi-or
- hi-pa
- hi-ta
- hi-te
- hi-ur
- ml-mr
- ml-or
- ml-pa
- ml-ta
- ml-te
- ml-ur
- mr-or
- mr-pa
- mr-ta
- mr-te
- mr-ur
- or-pa
- or-ta
- or-te
- or-ur
- pa-ta
- pa-te
- pa-ur
- ta-te
- ta-ur
- te-ur
dataset_info:
- config_name: or-ur
features:
- name: translation
dtype:
translation:
languages:
- or
- ur
splits:
- name: train
num_bytes: 27790211
num_examples: 43766
download_size: 393352875
dataset_size: 27790211
- config_name: ml-or
features:
- name: translation
dtype:
translation:
languages:
- ml
- or
splits:
- name: train
num_bytes: 16011549
num_examples: 19413
download_size: 393352875
dataset_size: 16011549
- config_name: bn-ta
features:
- name: translation
dtype:
translation:
languages:
- bn
- ta
splits:
- name: train
num_bytes: 28706668
num_examples: 33005
download_size: 393352875
dataset_size: 28706668
- config_name: gu-mr
features:
- name: translation
dtype:
translation:
languages:
- gu
- mr
splits:
- name: train
num_bytes: 24253770
num_examples: 30766
download_size: 393352875
dataset_size: 24253770
- config_name: hi-or
features:
- name: translation
dtype:
translation:
languages:
- hi
- or
splits:
- name: train
num_bytes: 45086618
num_examples: 61070
download_size: 393352875
dataset_size: 45086618
- config_name: en-or
features:
- name: translation
dtype:
translation:
languages:
- en
- or
splits:
- name: train
num_bytes: 51258494
num_examples: 98230
download_size: 393352875
dataset_size: 51258494
- config_name: mr-ur
features:
- name: translation
dtype:
translation:
languages:
- mr
- ur
splits:
- name: train
num_bytes: 34053295
num_examples: 49691
download_size: 393352875
dataset_size: 34053295
- config_name: en-ta
features:
- name: translation
dtype:
translation:
languages:
- en
- ta
splits:
- name: train
num_bytes: 74931542
num_examples: 118759
download_size: 393352875
dataset_size: 74931542
- config_name: hi-ta
features:
- name: translation
dtype:
translation:
languages:
- hi
- ta
splits:
- name: train
num_bytes: 57628429
num_examples: 64945
download_size: 393352875
dataset_size: 57628429
- config_name: bn-en
features:
- name: translation
dtype:
translation:
languages:
- bn
- en
splits:
- name: train
num_bytes: 53291968
num_examples: 93560
download_size: 393352875
dataset_size: 53291968
- config_name: bn-or
features:
- name: translation
dtype:
translation:
languages:
- bn
- or
splits:
- name: train
num_bytes: 19819136
num_examples: 26456
download_size: 393352875
dataset_size: 19819136
- config_name: ml-ta
features:
- name: translation
dtype:
translation:
languages:
- ml
- ta
splits:
- name: train
num_bytes: 21685938
num_examples: 23609
download_size: 393352875
dataset_size: 21685938
- config_name: gu-ur
features:
- name: translation
dtype:
translation:
languages:
- gu
- ur
splits:
- name: train
num_bytes: 20312414
num_examples: 29938
download_size: 393352875
dataset_size: 20312414
- config_name: bn-ml
features:
- name: translation
dtype:
translation:
languages:
- bn
- ml
splits:
- name: train
num_bytes: 15545271
num_examples: 18149
download_size: 393352875
dataset_size: 15545271
- config_name: ml-pa
features:
- name: translation
dtype:
translation:
languages:
- ml
- pa
splits:
- name: train
num_bytes: 18114904
num_examples: 21978
download_size: 393352875
dataset_size: 18114904
- config_name: en-pa
features:
- name: translation
dtype:
translation:
languages:
- en
- pa
splits:
- name: train
num_bytes: 56316514
num_examples: 103296
download_size: 393352875
dataset_size: 56316514
- config_name: bn-hi
features:
- name: translation
dtype:
translation:
languages:
- bn
- hi
splits:
- name: train
num_bytes: 40970170
num_examples: 49598
download_size: 393352875
dataset_size: 40970170
- config_name: hi-pa
features:
- name: translation
dtype:
translation:
languages:
- hi
- pa
splits:
- name: train
num_bytes: 59293062
num_examples: 75200
download_size: 393352875
dataset_size: 59293062
- config_name: gu-te
features:
- name: translation
dtype:
translation:
languages:
- gu
- te
splits:
- name: train
num_bytes: 14517828
num_examples: 16335
download_size: 393352875
dataset_size: 14517828
- config_name: pa-ta
features:
- name: translation
dtype:
translation:
languages:
- pa
- ta
splits:
- name: train
num_bytes: 39144065
num_examples: 46349
download_size: 393352875
dataset_size: 39144065
- config_name: hi-ml
features:
- name: translation
dtype:
translation:
languages:
- hi
- ml
splits:
- name: train
num_bytes: 24015298
num_examples: 27167
download_size: 393352875
dataset_size: 24015298
- config_name: or-te
features:
- name: translation
dtype:
translation:
languages:
- or
- te
splits:
- name: train
num_bytes: 9011734
num_examples: 10475
download_size: 393352875
dataset_size: 9011734
- config_name: en-ml
features:
- name: translation
dtype:
translation:
languages:
- en
- ml
splits:
- name: train
num_bytes: 27754969
num_examples: 44986
download_size: 393352875
dataset_size: 27754969
- config_name: en-hi
features:
- name: translation
dtype:
translation:
languages:
- en
- hi
splits:
- name: train
num_bytes: 160009440
num_examples: 269594
download_size: 393352875
dataset_size: 160009440
- config_name: bn-pa
features:
- name: translation
dtype:
translation:
languages:
- bn
- pa
splits:
- name: train
num_bytes: 27522373
num_examples: 35109
download_size: 393352875
dataset_size: 27522373
- config_name: mr-te
features:
- name: translation
dtype:
translation:
languages:
- mr
- te
splits:
- name: train
num_bytes: 16838115
num_examples: 18179
download_size: 393352875
dataset_size: 16838115
- config_name: mr-pa
features:
- name: translation
dtype:
translation:
languages:
- mr
- pa
splits:
- name: train
num_bytes: 38720410
num_examples: 50418
download_size: 393352875
dataset_size: 38720410
- config_name: bn-te
features:
- name: translation
dtype:
translation:
languages:
- bn
- te
splits:
- name: train
num_bytes: 15529843
num_examples: 17605
download_size: 393352875
dataset_size: 15529843
- config_name: gu-hi
features:
- name: translation
dtype:
translation:
languages:
- gu
- hi
splits:
- name: train
num_bytes: 33606230
num_examples: 41587
download_size: 393352875
dataset_size: 33606230
- config_name: ta-ur
features:
- name: translation
dtype:
translation:
languages:
- ta
- ur
splits:
- name: train
num_bytes: 37593813
num_examples: 48892
download_size: 393352875
dataset_size: 37593813
- config_name: te-ur
features:
- name: translation
dtype:
translation:
languages:
- te
- ur
splits:
- name: train
num_bytes: 16485209
num_examples: 21148
download_size: 393352875
dataset_size: 16485209
- config_name: or-pa
features:
- name: translation
dtype:
translation:
languages:
- or
- pa
splits:
- name: train
num_bytes: 30081903
num_examples: 43159
download_size: 393352875
dataset_size: 30081903
- config_name: gu-ml
features:
- name: translation
dtype:
translation:
languages:
- gu
- ml
splits:
- name: train
num_bytes: 15749821
num_examples: 18252
download_size: 393352875
dataset_size: 15749821
- config_name: gu-pa
features:
- name: translation
dtype:
translation:
languages:
- gu
- pa
splits:
- name: train
num_bytes: 27441041
num_examples: 35566
download_size: 393352875
dataset_size: 27441041
- config_name: hi-te
features:
- name: translation
dtype:
translation:
languages:
- hi
- te
splits:
- name: train
num_bytes: 26473814
num_examples: 28569
download_size: 393352875
dataset_size: 26473814
- config_name: en-te
features:
- name: translation
dtype:
translation:
languages:
- en
- te
splits:
- name: train
num_bytes: 28620219
num_examples: 44888
download_size: 393352875
dataset_size: 28620219
- config_name: ml-te
features:
- name: translation
dtype:
translation:
languages:
- ml
- te
splits:
- name: train
num_bytes: 9690153
num_examples: 10480
download_size: 393352875
dataset_size: 9690153
- config_name: pa-ur
features:
- name: translation
dtype:
translation:
languages:
- pa
- ur
splits:
- name: train
num_bytes: 34959176
num_examples: 51831
download_size: 393352875
dataset_size: 34959176
- config_name: hi-ur
features:
- name: translation
dtype:
translation:
languages:
- hi
- ur
splits:
- name: train
num_bytes: 81262590
num_examples: 109951
download_size: 393352875
dataset_size: 81262590
- config_name: mr-or
features:
- name: translation
dtype:
translation:
languages:
- mr
- or
splits:
- name: train
num_bytes: 33998805
num_examples: 47001
download_size: 393352875
dataset_size: 33998805
- config_name: en-ur
features:
- name: translation
dtype:
translation:
languages:
- en
- ur
splits:
- name: train
num_bytes: 100571795
num_examples: 202578
download_size: 393352875
dataset_size: 100571795
- config_name: ml-ur
features:
- name: translation
dtype:
translation:
languages:
- ml
- ur
splits:
- name: train
num_bytes: 15663718
num_examples: 20913
download_size: 393352875
dataset_size: 15663718
- config_name: bn-mr
features:
- name: translation
dtype:
translation:
languages:
- bn
- mr
splits:
- name: train
num_bytes: 27604502
num_examples: 34043
download_size: 393352875
dataset_size: 27604502
- config_name: gu-ta
features:
- name: translation
dtype:
translation:
languages:
- gu
- ta
splits:
- name: train
num_bytes: 25089131
num_examples: 29187
download_size: 393352875
dataset_size: 25089131
- config_name: pa-te
features:
- name: translation
dtype:
translation:
languages:
- pa
- te
splits:
- name: train
num_bytes: 23119690
num_examples: 25684
download_size: 393352875
dataset_size: 23119690
- config_name: bn-gu
features:
- name: translation
dtype:
translation:
languages:
- bn
- gu
splits:
- name: train
num_bytes: 19899277
num_examples: 25166
download_size: 393352875
dataset_size: 19899277
- config_name: bn-ur
features:
- name: translation
dtype:
translation:
languages:
- bn
- ur
splits:
- name: train
num_bytes: 27540215
num_examples: 39290
download_size: 393352875
dataset_size: 27540215
- config_name: ml-mr
features:
- name: translation
dtype:
translation:
languages:
- ml
- mr
splits:
- name: train
num_bytes: 19723458
num_examples: 22796
download_size: 393352875
dataset_size: 19723458
- config_name: or-ta
features:
- name: translation
dtype:
translation:
languages:
- or
- ta
splits:
- name: train
num_bytes: 35357904
num_examples: 44035
download_size: 393352875
dataset_size: 35357904
- config_name: ta-te
features:
- name: translation
dtype:
translation:
languages:
- ta
- te
splits:
- name: train
num_bytes: 17415768
num_examples: 17359
download_size: 393352875
dataset_size: 17415768
- config_name: gu-or
features:
- name: translation
dtype:
translation:
languages:
- gu
- or
splits:
- name: train
num_bytes: 20111876
num_examples: 27162
download_size: 393352875
dataset_size: 20111876
- config_name: en-gu
features:
- name: translation
dtype:
translation:
languages:
- en
- gu
splits:
- name: train
num_bytes: 33630906
num_examples: 59739
download_size: 393352875
dataset_size: 33630906
- config_name: hi-mr
features:
- name: translation
dtype:
translation:
languages:
- hi
- mr
splits:
- name: train
num_bytes: 55680473
num_examples: 69186
download_size: 393352875
dataset_size: 55680473
- config_name: mr-ta
features:
- name: translation
dtype:
translation:
languages:
- mr
- ta
splits:
- name: train
num_bytes: 41585343
num_examples: 48535
download_size: 393352875
dataset_size: 41585343
- config_name: en-mr
features:
- name: translation
dtype:
translation:
languages:
- en
- mr
splits:
- name: train
num_bytes: 65042597
num_examples: 117199
download_size: 393352875
dataset_size: 65042597
---
# Dataset Card for CVIT PIB
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://preon.iiit.ac.in/~jerin/bhasha/
- **Paper:** https://arxiv.org/abs/2008.04860
- **Point of Contact:** [Mailing List](cvit-bhasha@googlegroups.com)
### Dataset Summary
This dataset is a large-scale sentence-aligned corpus in 11 Indian languages, viz. the CVIT-PIB corpus, which is the largest multilingual corpus available for Indian languages.
### Supported Tasks and Leaderboards
- Machine Translation
### Languages
Parallel data are provided for pairs of the following languages: en, bn, gu, hi, ml, mr, pa, or, ta, te, ur.
## Dataset Structure
### Data Instances
An example for the "gu-pa" language pair:
```
{
'translation': {
'gu': 'એવો નિર્ણય લેવાયો હતો કે ખંતપૂર્વકની કામગીરી હાથ ધરવા, કાયદેસર અને ટેકનિકલ મૂલ્યાંકન કરવા, વેન્ચર કેપિટલ ઇન્વેસ્ટમેન્ટ સમિતિની બેઠક યોજવા વગેરે એઆઇએફને કરવામાં આવેલ પ્રતિબદ્ધતાના 0.50 ટકા સુધી અને બાકીની રકમ એફએફએસને પૂર્ણ કરવામાં આવશે.',
'pa': 'ਇਹ ਵੀ ਫੈਸਲਾ ਕੀਤਾ ਗਿਆ ਕਿ ਐੱਫਆਈਆਈ ਅਤੇ ਬਕਾਏ ਲਈ ਕੀਤੀਆਂ ਗਈਆਂ ਵਚਨਬੱਧਤਾਵਾਂ ਦੇ 0.50 % ਦੀ ਸੀਮਾ ਤੱਕ ਐੱਫਈਐੱਸ ਨੂੰ ਮਿਲਿਆ ਜਾਏਗਾ, ਇਸ ਨਾਲ ਉੱਦਮ ਪੂੰਜੀ ਨਿਵੇਸ਼ ਕਮੇਟੀ ਦੀ ਬੈਠਕ ਦਾ ਆਯੋਜਨ ਉਚਿਤ ਸਾਵਧਾਨੀ, ਕਾਨੂੰਨੀ ਅਤੇ ਤਕਨੀਕੀ ਮੁੱਲਾਂਕਣ ਲਈ ਸੰਚਾਲਨ ਖਰਚ ਆਦਿ ਦੀ ਪੂਰਤੀ ਹੋਵੇਗੀ।'
}
}
```
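Each language pair is a separate configuration whose name follows the `xx-yy` pattern above. A minimal loading sketch with the `datasets` library, assuming the Hub id `pib` (substitute the id this card is published under if it differs):
```python
from datasets import load_dataset

# Assumed Hub id `pib`; each config name is a language pair such as "gu-pa".
dataset = load_dataset("pib", "gu-pa", split="train")

pair = dataset[0]["translation"]
print(pair["gu"])  # Gujarati side of the sentence pair
print(pair["pa"])  # Punjabi side of the sentence pair
```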
### Data Fields
- `translation`: Translation field containing the parallel text for the pair of languages.
### Data Splits
The dataset is in a single "train" split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information
```
@inproceedings{siripragada-etal-2020-multilingual,
title = "A Multilingual Parallel Corpora Collection Effort for {I}ndian Languages",
author = "Siripragada, Shashank and
Philip, Jerin and
Namboodiri, Vinay P. and
Jawahar, C V",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.462",
pages = "3743--3751",
language = "English",
ISBN = "979-10-95546-34-4",
}
@article{2020,
title={Revisiting Low Resource Status of Indian Languages in Machine Translation},
url={http://dx.doi.org/10.1145/3430984.3431026},
DOI={10.1145/3430984.3431026},
journal={8th ACM IKDD CODS and 26th COMAD},
publisher={ACM},
author={Philip, Jerin and Siripragada, Shashank and Namboodiri, Vinay P. and Jawahar, C. V.},
year={2020},
month={Dec}
}
```
### Contributions
Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset,
and [@albertvillanova](https://github.com/albertvillanova) for updating its version. |
piqa | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: piqa
pretty_name: 'Physical Interaction: Question Answering'
dataset_info:
features:
- name: goal
dtype: string
- name: sol1
dtype: string
- name: sol2
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
config_name: plain_text
splits:
- name: train
num_bytes: 4104026
num_examples: 16113
- name: test
num_bytes: 761521
num_examples: 3084
- name: validation
num_bytes: 464321
num_examples: 1838
download_size: 2638625
dataset_size: 5329868
---
# Dataset Card for "Physical Interaction: Question Answering"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PIQA homepage](https://yonatanbisk.com/piqa/)
- **Paper:** [PIQA: Reasoning about Physical Commonsense in Natural Language](https://arxiv.org/abs/1911.11641)
- **Leaderboard:** [Official leaderboard](https://yonatanbisk.com/piqa/) *Note that there is a [2nd leaderboard](https://leaderboard.allenai.org/physicaliqa) featuring a different (blind) test set with 3,446 examples as part of the Machine Commonsense DARPA project.*
- **Point of Contact:** [Yonatan Bisk](https://yonatanbisk.com/piqa/)
### Dataset Summary
*To apply eyeshadow without a brush, should I use a cotton swab or a toothpick?*
Questions requiring this kind of physical commonsense pose a challenge to state-of-the-art
natural language understanding systems. The PIQA dataset introduces the task of physical commonsense reasoning
and a corresponding benchmark dataset Physical Interaction: Question Answering or PIQA.
Physical commonsense knowledge is a major challenge on the road to true AI-completeness,
including robots that interact with the world and understand natural language.
PIQA focuses on everyday situations with a preference for atypical solutions.
The dataset is inspired by instructables.com, which provides users with instructions on how to build, craft,
bake, or manipulate objects using everyday materials.
### Supported Tasks and Leaderboards
The underlying task is formulated as multiple choice question answering: given a question `q` and two possible solutions `s1`, `s2`, a model or a human must choose the most appropriate solution, of which exactly one is correct.
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
An example looks like this:
```
{
"goal": "How do I ready a guinea pig cage for it's new occupants?",
"sol1": "Provide the guinea pig with a cage full of a few inches of bedding made of ripped paper strips, you will also need to supply it with a water bottle and a food dish.",
"sol2": "Provide the guinea pig with a cage full of a few inches of bedding made of ripped jeans material, you will also need to supply it with a water bottle and a food dish.",
"label": 0,
}
```
Note that the test set contains no labels. Predictions need to be submitted to the leaderboard.
### Data Fields
- `goal`: the question which requires physical commonsense to be answered correctly
- `sol1`: the first solution
- `sol2`: the second solution
- `label`: the correct solution. `0` refers to `sol1` and `1` refers to `sol2`
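Since `label` simply indexes into the two solution fields, the gold solution can be recovered in one line; a minimal sketch with the `datasets` library:
```python
from datasets import load_dataset

piqa = load_dataset("piqa", split="train")

def gold_solution(example):
    # label 0 -> sol1, label 1 -> sol2 (test labels are withheld)
    return example["sol1"] if example["label"] == 0 else example["sol2"]

print(gold_solution(piqa[0]))
```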
### Data Splits
The dataset contains 16,113 examples for training, 1,838 for validation and 3,084 for testing.
## Dataset Creation
### Curation Rationale
The goal of the dataset is to construct a resource that requires concrete physical reasoning.
### Source Data
The authors provide a prompt to the annotators derived from instructables.com. The instructables website is a crowdsourced collection of instructions for doing everything from cooking to car repair. In most cases, users provide images or videos detailing each step and a list of tools that will be required. Most goals are simultaneously rare and unsurprising. While an annotator is unlikely to have built a UV-fluorescent steampunk lamp or made a backpack out of duct tape, it is not surprising that someone interested in home crafting would create these, nor will the tools and materials be unfamiliar to the average person. Using these examples as the seed for their annotation helps remind annotators about the less prototypical uses of everyday objects. Second, and equally important, is that instructions build on one another. This means that any QA pair inspired by an instructable is more likely to explicitly state assumptions about what preconditions need to be met to start the task and what postconditions define success.
Annotators were asked to glance at the instructions of an instructable and pull out or have it inspire them to construct two component tasks. They would then articulate the goal (often centered on atypical materials) and how to achieve it. In addition, annotators were asked to provide a permutation of their own solution that makes it invalid (the negative solution), often subtly.
#### Initial Data Collection and Normalization
During validation, examples with low agreement were removed from the data.
The dataset was further cleaned to remove stylistic artifacts and trivial examples, which have been shown to artificially inflate model performance on previous NLI benchmarks, using the AFLite algorithm introduced in ([Sakaguchi et al. 2020](https://arxiv.org/abs/1907.10641); [Sap et al. 2019](https://arxiv.org/abs/1904.09728)), which is an improvement on adversarial filtering ([Zellers et al., 2018](https://arxiv.org/abs/1808.05326)).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Annotations are by construction obtained when crowdsourcers complete the prompt.
#### Who are the annotators?
Paid crowdsourcers
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown
### Citation Information
```
@inproceedings{Bisk2020,
author = {Yonatan Bisk and Rowan Zellers and
Ronan Le Bras and Jianfeng Gao
and Yejin Choi},
title = {PIQA: Reasoning about Physical Commonsense in
Natural Language},
booktitle = {Thirty-Fourth AAAI Conference on
Artificial Intelligence},
year = {2020},
}
```
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
pn_summary | ---
annotations_creators:
- found
language_creators:
- found
language:
- fa
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
- text-classification
task_ids:
- news-articles-summarization
- news-articles-headline-generation
- text-simplification
- topic-classification
paperswithcode_id: pn-summary
pretty_name: Persian News Summary (PnSummary)
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: article
dtype: string
- name: summary
dtype: string
- name: category
dtype:
class_label:
names:
'0': Economy
'1': Roads-Urban
'2': Banking-Insurance
'3': Agriculture
'4': International
'5': Oil-Energy
'6': Industry
'7': Transportation
'8': Science-Technology
'9': Local
'10': Sports
'11': Politics
'12': Art-Culture
'13': Society
'14': Health
'15': Research
'16': Education-University
'17': Tourism
- name: categories
dtype: string
- name: network
dtype:
class_label:
names:
'0': Tahlilbazaar
'1': Imna
'2': Shana
'3': Mehr
'4': Irna
'5': Khabaronline
- name: link
dtype: string
config_name: 1.0.0
splits:
- name: train
num_bytes: 309436493
num_examples: 82022
- name: validation
num_bytes: 21311817
num_examples: 5592
- name: test
num_bytes: 20936820
num_examples: 5593
download_size: 89591141
dataset_size: 351685130
---
# Dataset Card for Persian News Summary (pn_summary)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/hooshvare/pn-summary/
- **Paper:** https://arxiv.org/abs/2012.11204
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [Mehrdad Farahani](mailto:m3hrdadfphi@gmail.com)
### Dataset Summary
A well-structured summarization dataset for the Persian language, consisting of 93,207 records. It is prepared for abstractive/extractive summarization tasks (like cnn_dailymail for English). It can also be used in other scopes like text generation, title generation, and news category classification.
Note that newlines in the text were replaced with the `[n]` symbol. Convert them back to normal newlines (e.g. `t.replace("[n]", "\n")`) before using the data.
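A minimal preprocessing sketch with the `datasets` library, using the config name `1.0.0` listed in the metadata above:
```python
from datasets import load_dataset

pn = load_dataset("pn_summary", "1.0.0", split="train")

def restore_newlines(example):
    # The corpus encodes newlines as the literal token "[n]".
    example["article"] = example["article"].replace("[n]", "\n")
    example["summary"] = example["summary"].replace("[n]", "\n")
    return example

pn = pn.map(restore_newlines)
```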
### Supported Tasks and Leaderboards
The dataset is prepared for Abstractive/Extractive summarization tasks (like cnn_dailymail for English). It can also be used in other scopes like Text Generation, Title Generation, and News Category Classification.
### Languages
The dataset is mostly in Persian, with occasional English mixed in.
## Dataset Structure
### Data Instances
A record consists of 8 features:
```python
record = ['id','title', 'article', 'summary', 'category', 'categories', 'network', 'link']
```
In the following, you can see an example of `pn_summary`.
```json
{
"article": "به گزارش شانا، علی کاردر امروز (۲۷ دی ماه) در مراسم تودیع محسن قمصری، مدیر سابق امور بین الملل شرکت ملی نفت ایران و معارفه سعید خوشرو، مدیر جدید امور بین الملل این شرکت، گفت: مدیریت امور بین\u200eالملل به عنوان یکی از تاثیرگذارترین مدیریت\u200cهای شرکت ملی نفت ایران در دوران تحریم\u200cهای ظالمانه غرب علیه کشورمان بسیار هوشمندانه عمل کرد و ما توانستیم به خوبی از عهده تحریم\u200cها برآییم. [n] وی افزود: مجموعه امور بین الملل در همه دوران\u200cها با سختی\u200cها و مشکلات بسیاری مواجه بوده است، به ویژه در دوره اخیر به دلیل مسائل پیرامون تحریم وظیفه سنگینی بر عهده داشت که با تدبیر مدیریت خوب این مجموعه سربلند از آن بیرون آمد. [n] کاردر با قدردانی از زحمات محسن قمصری، به سلامت مدیریت امور بین الملل این شرکت اشاره کرد و افزود: محوریت کار مدیریت اموربین الملل سلامت مالی بوده است. [n] وی بر ضرورت نهادینه سازی جوانگرایی در مدیریت شرکت ملی نفت ایران تاکید کرد و گفت: مدیریت امور بین الملل در پرورش نیروهای زبده و کارآزموده آنچنان قوی عملکرده است که برای انتخاب مدیر جدید مشکلی وجود نداشت. [n] کاردر، حرفه\u200eای\u200eگری و کار استاندارد را از ویژگی\u200cهای مدیران این مدیریت برشمرد و گفت: نگاه جامع، خلاقیت و نوآوری و بکارگیری نیروهای جوان باید همچنان مد نظر مدیریت جدید امور بین الملل شرکت ملی نفت ایران باشد.",
"categories": "نفت",
"category": 5,
"id": "738e296491f8b24c5aa63e9829fd249fb4428a66",
"link": "https://www.shana.ir/news/275284/%D9%85%D8%AF%DB%8C%D8%B1%DB%8C%D8%AA-%D9%81%D8%B1%D9%88%D8%B4-%D9%86%D9%81%D8%AA-%D8%AF%D8%B1-%D8%AF%D9%88%D8%B1%D8%A7%D9%86-%D8%AA%D8%AD%D8%B1%DB%8C%D9%85-%D9%87%D9%88%D8%B4%D9%85%D9%86%D8%AF%D8%A7%D9%86%D9%87-%D8%B9%D9%85%D9%84-%DA%A9%D8%B1%D8%AF",
"network": 2,
"summary": "مدیرعامل شرکت ملی نفت، عملکرد مدیریت امور بین\u200eالملل این شرکت را در دوران تحریم بسیار هوشمندانه خواند و گفت: امور بین الملل در دوران پس از تحریم\u200eها نیز می\u200cتواند نقش بزرگی در تسریع روند توسعه داشته باشد.",
"title": "مدیریت فروش نفت در دوران تحریم هوشمندانه عمل کرد"
}
```
### Data Fields
- `id (string)`: ID of the news.
- `title (string)`: The title of the news.
- `article (string)`: The article of the news.
- `summary (string)`: The summary of the news.
- `category (int)`: The category of news in English (index of categories), including `Economy`, `Roads-Urban`, `Banking-Insurance`, `Agriculture`, `International`, `Oil-Energy`, `Industry`, `Transportation`, `Science-Technology`, `Local`, `Sports`, `Politics`, `Art-Culture`, `Society`, `Health`, `Research`, `Education-University`, `Tourism`.
- `categories (string)`: The category and sub-category of the news in Persian.
- `network (int)`: The news agency name (index of news agencies), including `Tahlilbazaar`, `Imna`, `Shana`, `Mehr`, `Irna`, `Khabaronline`.
- `link (string)`: The link of the news.
The category in English includes 18 different article categories from economy to tourism.
```bash
Economy, Roads-Urban, Banking-Insurance, Agriculture, International, Oil-Energy, Industry, Transportation, Science-Technology, Local, Sports, Politics, Art-Culture, Society, Health, Research, Education-University, Tourism
```
### Data Splits
Training (82,022 records, 8 features), validation (5,592 records, 8 features), and test split (5,593 records and 8 features).
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The dataset comprises numerous articles of various categories that have been crawled from six news agency websites (Tahlilbazaar, Imna, Shana, Mehr, Irna, and Khabaronline).
### Annotations
#### Annotation process
Each record (article) includes the long original text as well as a human-generated summary. The total number of cleaned articles is 93,207 (from 200,000 crawled articles).
#### Who are the annotators?
The dataset was organized by [Mehrdad Farahani](https://github.com/m3hrdadfi), [Mohammad Gharachorloo](https://github.com/baarsaam) and [Mohammad Manthouri](https://github.com/mmanthouri) for this paper [Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization](https://arxiv.org/abs/2012.11204)
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was curated by [Mehrdad Farahani](https://github.com/m3hrdadfi), [Mohammad Gharachorloo](https://github.com/baarsaam) and [Mohammad Manthouri](https://github.com/mmanthouri).
### Licensing Information
This dataset is licensed under MIT License.
### Citation Information
```bibtex
@article{pnSummary,
title={Leveraging ParsBERT and Pretrained mT5 for Persian Abstractive Text Summarization},
author={Mehrdad Farahani, Mohammad Gharachorloo, Mohammad Manthouri},
year={2020},
eprint={2012.11204},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@m3hrdadfi](https://github.com/m3hrdadfi) for adding this dataset. |
poem_sentiment | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: gutenberg-poem-dataset
pretty_name: Gutenberg Poem Dataset
dataset_info:
features:
- name: id
dtype: int32
- name: verse_text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
'2': no_impact
splits:
- name: train
num_bytes: 48555
num_examples: 892
- name: validation
num_bytes: 5788
num_examples: 105
- name: test
num_bytes: 5588
num_examples: 104
download_size: 49870
dataset_size: 59931
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
verse_text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for Gutenberg Poem Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/google-research-datasets/poem-sentiment)
- **Paper:** [Investigating Societal Biases in a Poetry Composition System](https://arxiv.org/abs/2011.02686)
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
Poem Sentiment is a sentiment dataset of poem verses from Project Gutenberg.
This dataset can be used for tasks such as sentiment classification or style transfer for poems.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Instances
Example of one instance in the dataset:
```
{'id': 0, 'label': 2, 'verse_text': 'with pale blue berries. in these peaceful shades--'}
```
### Data Fields
- `id`: index of the example
- `verse_text`: The text of the poem verse
- `label`: the sentiment label, where
- 0 = negative
- 1 = positive
- 2 = no impact
- 3 = mixed (both negative and positive)
> Note: The original dataset uses different label indices (negative = -1, no impact = 0, positive = 1)
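If you need the original label scheme, a minimal remapping sketch (the `original_label` column name is just for illustration):
```python
from datasets import load_dataset

poems = load_dataset("poem_sentiment", split="train")

# This release: 0 = negative, 1 = positive, 2 = no_impact.
# Original scheme: negative = -1, no impact = 0, positive = 1.
# Label 3 (mixed) is mentioned in the card but absent from this release's class names.
TO_ORIGINAL = {0: -1, 1: 1, 2: 0}

poems = poems.map(lambda ex: {"original_label": TO_ORIGINAL[ex["label"]]})
print(poems[0])
```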
### Data Splits
The dataset is split into a `train`, `validation`, and `test` split with the following sizes:
| | train | validation | test |
|--------------------|------:|-----------:|-----:|
| Number of examples | 892 | 105 | 104 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This work is licensed under a [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@misc{sheng2020investigating,
title={Investigating Societal Biases in a Poetry Composition System},
author={Emily Sheng and David Uthus},
year={2020},
eprint={2011.02686},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
polemo2 | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- bsd-3-clause
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
pretty_name: polemo2
dataset_info:
- config_name: in
features:
- name: sentence
dtype: string
- name: target
dtype:
class_label:
names:
'0': __label__meta_amb
'1': __label__meta_minus_m
'2': __label__meta_plus_m
'3': __label__meta_zero
splits:
- name: train
num_bytes: 4810215
num_examples: 5783
- name: test
num_bytes: 582052
num_examples: 722
- name: validation
num_bytes: 593530
num_examples: 723
download_size: 2350339
dataset_size: 5985797
- config_name: out
features:
- name: sentence
dtype: string
- name: target
dtype:
class_label:
names:
'0': __label__meta_amb
'1': __label__meta_minus_m
'2': __label__meta_plus_m
'3': __label__meta_zero
splits:
- name: train
num_bytes: 4810215
num_examples: 5783
- name: test
num_bytes: 309790
num_examples: 494
- name: validation
num_bytes: 310977
num_examples: 494
download_size: 2139891
dataset_size: 5430982
---
# Dataset Card for PolEmo2.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
https://clarin-pl.eu/dspace/handle/11321/710
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
PolEmo2.0 is a set of online reviews from the medicine and hotels domains. The task is to predict the sentiment of a review. There are two separate test sets, allowing for in-domain (medicine and hotels) as well as out-of-domain (products and university) validation.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Polish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- sentence: string, the review
- target: the sentiment class of the review
The tag system is the same as the one used in plWordNet Emo for lexical units: [+m] (strong positive), [+s] (weak positive), [-m] (strong negative), [-s] (weak negative), [amb] (ambiguous) and [0] (neutral).
Note that the test set does not have targets, so -1 is used instead.
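A minimal sketch for decoding the class indices (config `in`; the `out` config works the same way):
```python
from datasets import load_dataset

polemo = load_dataset("polemo2", "in", split="train")

to_name = polemo.features["target"].int2str  # ClassLabel index -> name
example = polemo[0]
print(example["sentence"])
print(to_name(example["target"]))  # e.g. "__label__meta_plus_m"
# Note: per the card, test-set targets are -1, which has no ClassLabel name.
```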
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-NC-SA 4.0
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset. |
poleval2019_cyberbullying | ---
annotations_creators:
- found
language_creators:
- found
language:
- pl
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- intent-classification
pretty_name: Poleval 2019 cyberbullying
dataset_info:
- config_name: task01
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 1104322
num_examples: 10041
- name: test
num_bytes: 109681
num_examples: 1000
download_size: 410001
dataset_size: 1214003
- config_name: task02
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
splits:
- name: train
num_bytes: 1104322
num_examples: 10041
- name: test
num_bytes: 109681
num_examples: 1000
download_size: 410147
dataset_size: 1214003
---
# Dataset Card for Poleval 2019 cyberbullying
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://2019.poleval.pl/index.php/tasks/task6
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Task 6-1: Harmful vs non-harmful
In this task, the participants are to distinguish between normal/non-harmful tweets (class: 0) and tweets that contain any kind of harmful information (class: 1). This includes cyberbullying, hate speech and related phenomena. The data for the task can be downloaded from the task homepage.
Task 6-2: Type of harmfulness
In this task, the participants shall distinguish between three classes of tweets: 0 (non-harmful), 1 (cyberbullying), 2 (hate-speech). There are various definitions of both cyberbullying and hate-speech, some of them even putting those two phenomena in the same group. The specific conditions on which the annotations for both cyberbullying and hate-speech were based, worked out during ten years of research, are summarized in an introductory paper for the task. However, the main and definitive condition to distinguish the two is whether the harmful action is addressed towards a private person(s) (cyberbullying), or a public person/entity/large group (hate-speech).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Polish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- text: the provided tweet
- label: the harmfulness class
  - for task 6-1 (config `task01`): 0 (non-harmful) or 1 (harmful)
  - for task 6-2 (config `task02`): 0 (non-harmful), 1 (cyberbullying) or 2 (hate-speech)
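A minimal loading sketch for the two sub-tasks, using the config names listed in the metadata above:
```python
from datasets import load_dataset

# task01: binary harmful vs non-harmful; task02: three-way type of harmfulness.
binary = load_dataset("poleval2019_cyberbullying", "task01", split="train")
fine_grained = load_dataset("poleval2019_cyberbullying", "task02", split="train")

print(binary[0]["text"], binary[0]["label"])  # 0 = non-harmful, 1 = harmful
print(fine_grained[0]["label"])               # 0, 1 (cyberbullying) or 2 (hate-speech)
```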
### Data Splits
Train and Test
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@proceedings{ogr:kob:19:poleval,
editor = {Maciej Ogrodniczuk and Łukasz Kobyliński},
title = {{Proceedings of the PolEval 2019 Workshop}},
year = {2019},
address = {Warsaw, Poland},
publisher = {Institute of Computer Science, Polish Academy of Sciences},
url = {http://2019.poleval.pl/files/poleval2019.pdf},
 isbn = "978-83-63159-28-3"
}
```
### Contributions
Thanks to [@czabo](https://github.com/czabo) for adding this dataset. |
poleval2019_mt | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
- found
language:
- en
- pl
- ru
license:
- unknown
multilinguality:
- translation
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: Poleval2019Mt
dataset_info:
- config_name: ru-pl
features:
- name: translation
dtype:
translation:
languages:
- ru
- pl
splits:
- name: train
num_bytes: 2818015
num_examples: 20001
- name: validation
num_bytes: 415735
num_examples: 3001
- name: test
num_bytes: 266462
num_examples: 2969
download_size: 3355801
dataset_size: 3500212
- config_name: en-pl
features:
- name: translation
dtype:
translation:
languages:
- en
- pl
splits:
- name: train
num_bytes: 13217798
num_examples: 129255
- name: validation
num_bytes: 1209168
num_examples: 10001
- name: test
num_bytes: 562482
num_examples: 9845
download_size: 13851405
dataset_size: 14989448
- config_name: pl-ru
features:
- name: translation
dtype:
translation:
languages:
- pl
- ru
splits:
- name: train
num_bytes: 2818015
num_examples: 20001
- name: validation
num_bytes: 415735
num_examples: 3001
- name: test
num_bytes: 149423
num_examples: 2967
download_size: 3355801
dataset_size: 3383173
- config_name: pl-en
features:
- name: translation
dtype:
translation:
languages:
- pl
- en
splits:
- name: train
num_bytes: 13217798
num_examples: 129255
- name: validation
num_bytes: 1209168
num_examples: 10001
- name: test
num_bytes: 16
num_examples: 1
download_size: 13591306
dataset_size: 14426982
---
# Dataset Card for poleval2019_mt
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** PolEval-2019 competition. http://2019.poleval.pl/
- **Repository:** Links available [in this page](http://2019.poleval.pl/index.php/tasks/task4)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
PolEval is a SemEval-inspired evaluation campaign for natural language processing tools for Polish.
Submitted solutions compete against one another within certain tasks selected by organizers, using available data and are evaluated according to
pre-established procedures. One of the tasks in PolEval-2019 was Machine Translation (Task-4).
The task is to train as good a machine translation system as possible, using any technology, with limited textual resources.
The competition covers two language pairs: the more popular English-Polish (into the Polish direction) and the low-resourced Russian-Polish pair (in both directions).
Here, Polish-English is also made available to allow for training in both directions. However, the test data is ONLY available for English-Polish.
### Supported Tasks and Leaderboards
Supports Machine Translation between Russian to Polish and English to Polish (and vice versa).
### Languages
- Polish (pl)
- Russian (ru)
- English (en)
## Dataset Structure
### Data Instances
As the training data set, a set of bi-lingual corpora aligned at the sentence level has been prepared. The corpora are saved in UTF-8 encoding as plain text, one language per file.
### Data Fields
An example translation looks as follows:
```
{
'translation': {'ru': 'не содержала в себе моделей. Модели это сравнительно новое явление. ',
'pl': 'nie miała w sobie modeli. Modele to względnie nowa dziedzina. Tak więc, jeśli '}
}
```
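A minimal loading sketch (config `ru-pl`; the other pairs follow the same pattern):
```python
from datasets import load_dataset

ru_pl = load_dataset("poleval2019_mt", "ru-pl", split="train")

pair = ru_pl[0]["translation"]
print(pair["ru"])  # Russian side
print(pair["pl"])  # Polish side
```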
### Data Splits
The dataset is divided into train, validation and test splits for each language pair:
| | train | validation | test |
|-------|-------:|-----------:|-----:|
| ru-pl | 20001 | 3001 | 2969 |
| pl-ru | 20001 | 3001 | 2967 |
| en-pl | 129255 | 10001 | 9845 |
## Dataset Creation
### Curation Rationale
This data was curated as a task for PolEval-2019. The task is to train as good a machine translation system as possible, using any technology, with limited textual resources. The competition covers two language pairs: the more popular English-Polish (into the Polish direction) and the low-resourced Russian-Polish pair (in both directions).
PolEval is a SemEval-inspired evaluation campaign for natural language processing tools for Polish. Submitted tools compete against one another within certain tasks selected by organizers, using available data and are evaluated according to pre-established procedures.
PolEval 2019-related papers were presented at AI & NLP Workshop Day (Warsaw, May 31, 2019).
Links to the top-performing models on the various tasks (including Task-4: Machine Translation) are available on [this](http://2019.poleval.pl/index.php/publication) page.
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
The organization details of PolEval is present in this [link](http://2019.poleval.pl/index.php/organizers)
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@proceedings{ogr:kob:19:poleval,
editor = {Maciej Ogrodniczuk and Łukasz Kobyliński},
title = {{Proceedings of the PolEval 2019 Workshop}},
year = {2019},
address = {Warsaw, Poland},
publisher = {Institute of Computer Science, Polish Academy of Sciences},
url = {http://2019.poleval.pl/files/poleval2019.pdf},
isbn = "978-83-63159-28-3"}
}
```
### Contributions
Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset. |
polsum | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- pl
license:
- cc-by-3.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: null
pretty_name: Polish Summaries Corpus
dataset_info:
features:
- name: id
dtype: string
- name: date
dtype: string
- name: title
dtype: string
- name: section
dtype: string
- name: authors
dtype: string
- name: body
dtype: string
- name: summaries
sequence:
- name: ratio
dtype: int32
- name: type
dtype: string
- name: author
dtype: string
- name: body
dtype: string
- name: spans
sequence:
- name: start
dtype: int32
- name: end
dtype: int32
- name: span_text
dtype: string
splits:
- name: train
num_bytes: 34787575
num_examples: 569
download_size: 6082812
dataset_size: 34787575
---
# Dataset Card for Polish Summaries Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://zil.ipipan.waw.pl/PolishSummariesCorpus
- **Repository:** http://zil.ipipan.waw.pl/PolishSummariesCorpus
- **Paper:** http://nlp.ipipan.waw.pl/Bib/ogro:kop:14:lrec.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Mateusz Kopeć](http://zil.ipipan.waw.pl/MateuszKopec)
### Dataset Summary
The corpus contains a large number of manual summaries of news articles,
with many independently created summaries for a single text. Such an approach is meant to overcome annotator bias, which is often described as a problem when evaluating summarization algorithms against a single gold standard.
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
Polish
## Dataset Structure
### Data Instances
See below an example from the dataset. Detailed descriptions of the fields are provided in the following section.
```
{'authors': 'Krystyna Forowicz',
'body': "ROZMOWA\n\nProf. Krzysztof Ernst, kierownik Zakładu Optyki Instytutu Fizyki Doświadczalnej Uniwersytetu Warszawskiego\n\nLidarowe oczy\n\nRYS. MAREK KONECKI\n\nJutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.\n\nCzy to kosztowne urządzenie będzie służyło tylko naukowcom?\n\nTego typu lidar jest rzeczywiście drogi, kosztuje około miliona marek niemieckich. Jest to najnowsza generacja tego typu lidarów. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem, staramy się m.in. rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska. Nad lidarem pracują specjaliści od laserów i od komputerów. Współpracujemy z doskonałym laboratorium prof. Ludgera Wöste z Freie Universitat Berlin rozwijającym m.in. problematykę lidarową. Pakiet software'u wzbogacamy o nowe algorytmy, które potrafią lepiej i dokładniej rozszyfrowywać sygnał lidarowy, a w konsekwencji skażenia. Żeby przetworzyć tzw. sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać rozsądne dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji. \n\nBadania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą. Zasadniczy koszt jego budowy pokryła uzyskana od Fundacji dotacja. Część pieniędzy przekazał też Narodowy Fundusz Ochrony Środowiska i Gospodarki Wodnej oraz Komitet Badań Naukowych.\n\nCzy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?\n\nNie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze łącznie z dostarczeniem informacji o ich rozkładzie. Ale np. obecnie prowadzimy badania mające na celu rozszerzenie możliwości lidaru o taką substancję jak fosgen. Tym szkodliwym gazem może być skażone powietrze w miastach, w których zlokalizowane są zakłady chemiczne, np. w Bydgoszczy pewne ilości fosgenu emitują Zakłady Chemiczne Organika- Zachem. \n\nLidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć. Cząsteczki, które wykrywamy mają pasma absorbcji w bliskim nadfiolecie. Możemy np. badać zawartość ozonu w troposferze. Okazuje się bowiem, że o ile brak tego gazu w wysokich warstwach atmosfery powoduje groźny efekt cieplarniany, to jego nadmiar tuż nad Ziemią jest szkodliwy. Groźne są też substancje gazowe, jak np. tlenki azotu, będące następstwem spalin samochodowych. A samochodów przybywa.\n\nCzy stać nas będzie na prowadzenie pomiarów ozonu w miastach? \n\nKoszt jednego dnia kampanii pomiarowej firmy zachodnie szacują na kilka tysięcy DM. Potrzebne są pieniądze na utrzymanie lidaru, na prowadzenie badań. Nasze przedsięwzięcie nie ma charakteru komercyjnego. 
Koszt pomiarów będzie znacznie niższy. Chcemy np. mierzyć w Warszawie rozkłady koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta. Chcielibyśmy rozwinąć tutaj współpracę z państwowymi i wojewódzkimi służbami ochrony środowiska. Tego typu badania były prowadzone np. w Lyonie. Okazało się, że najwięcej tlenków azotu występuje niekoniecznie tam gdzie są one produkowane, to znaczy nie przy najruchliwszych ulicach, jeśli są one dobrze wentylowane a gromadzą się one w małych uliczkach. Przede wszystkim jednak do końca tego roku zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką. Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu trzech granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie. Prowadziliśmy pomiary w samym Turowie, gdzie elektrownia Turoszowska jest głównym źródłem emisji. W planie mamy Bogatynię, zagłębie miedziowe. \n\nW Czarnym Trójkącie istnieje wiele stacjonarnych stacji monitoringowych.\n\nNasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów. Możemy zatem śledzić ewolucję rozprzestrzeniania się tych zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi. Wyniki naszych pomiarów porównujemy z danymi uzyskanymi ze stacji monitoringowych. \n\nJak wypadł Czarny Trójkąt?\n\nKiedy występowaliśmy o finansowanie tego projektu do Fundacji Współpracy Polsko-Niemieckiej zanieczyszczenie powietrza w Czarnym Trójkącie było dużo większe niż obecnie i wszystko wskazuje na to, że będzie dalej spadać. Obecnie stężenie dwutlenku siarki jest na granicy naszych możliwości pomiarowych. Dla regionu Turoszowskiego to dobra wiadomość i dla stosunków polsko-niemieckich też.\n\nTypów lidarów jest wiele \n\nTen lidar pracuje w obszarze bliskiego nadfioletu i promieniowania widzialnego, które jest wynikiem wykorzystania drugiej lub trzeciej harmonicznej lasera szafirowego, pracującego na granicy czerwieni i podczerwieni. DIAL jest tym typem lidara, który dzisiaj ma zdecydowanie największe wzięcie w ochronie środowiska. Z lidarów korzysta meteorologia. W Stanach Zjednoczonych lidary umieszcza się na satelitach (program NASA). Określają na przestrzeni kilkudziesięciu kilometrów rozkłady temperatury, wilgotności, ciśnienia, a także prędkości wiatru. Wykrywają pojawianie się huraganów, a nawet mogą określać rozmiary oka tajfunu.\n\nIle takich urządzeń jest w Europie?\n\n- W Europie takich lidarów jak nasz jest zaledwie kilka. Większość z nich mierzy ozon, dwutlenek siarki i tlenek azotu. Wykrywanie toluenu i benzenu jest oryginalnym rozwiązaniem. Długość fali dla benzenu jest już na skraju możliwości widmowych. Nasz lidar typu DIAL jest najnowocześniejszym w Polsce. Ponadto jest lidarem ruchomym, zainstalowanym na samochodzie. Ale historia lidarów w naszym kraju jest dłuższa i zaczęła się na początku lat 60. Pierwsze próby prowadzone były w stacji geofizycznej PAN w Belsku, niedługo po skonstruowaniu pierwszego w świecie lasera rubinowego. Potem powstał lidar stacjonarny, również typu DIAL, w Gdańsku, a w Krakowie sodary - urządzenia oparte na falach akustycznych, wygodne np. do pomiarów szybkości wiatru. 
Lidar umieszczony na samochodzie i zbudowany w latach 80 na Politechnice Poznańskiej w perspektywie miał być lidarem typu DIAL.\n\nFizycy dotychczas nie zajmowali się ochroną środowiska?\n\nTaka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji (zdjęć satelitarnych) Instytutu Geofizyki i, co bardzo ważne, współpraca z Freie Universität Berlin. Mamy również na UW Międzywydziałowe Studia Ochrony Środowiska i studentom przekazujemy informacje o lidarze i fizycznych metodach badania środowiska. Nasze działania dydaktyczne bardzo efektywnie wspiera NFOŚ.\n\nRozmawiała Krystyna Forowicz",
'date': '1997-04-21',
'id': '199704210011',
'section': 'Nauka i Technika',
'summaries': {'author': ['I',
'I',
'I',
'C',
'C',
'C',
'K',
'K',
'K',
'G',
'G',
'G',
'J',
'J',
'J'],
'body': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.Czy to kosztowne urządzenie będzie służyło tylko naukowcom? Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad tym urządzeniem. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.Czy to kosztowne urządzenie będzie służyło tylko naukowcom? Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad tym urządzeniem. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą. Czy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?Nie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze łącznie z dostarczeniem informacji o ich rozkładzie. Możemy np. badać zawartość ozonu w troposferze. W Europie takich lidarów jak nasz jest zaledwie kilka. Większość z nich mierzy ozon, dwutlenek siarki i tlenek azotu. Fizycy dotychczas nie zajmowali się ochroną środowiska?Taka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji Instytutu Geofizyki i współpraca z Freie Universität Berlin.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. lidar Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, naukową I dydaktyczną. Żeby przetworzyć sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji. muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. lidar Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym. Jest to najnowsza generacja tego typu lidarów. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska. Żeby przetworzyć tzw. sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać rozsądne dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą. Zasadniczy koszt jego budowy pokryła uzyskana od Fundacji dotacja. Część pieniędzy przekazał też Narodowy Fundusz Ochrony Środowiska i Gospodarki Wodnej oraz Komitet Badań Naukowych. Lidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. lidar Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, naukową I dydaktyczną.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.Nasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.Tego typu lidar jest drogi, kosztuje około miliona marek niemieckich. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem, staramy się m.in. rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.Lidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć. Cząsteczki, które wykrywamy mają pasma absorbcji w bliskim nadfiolecie.Nasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów. Możemy zatem śledzić ewolucję rozprzestrzeniania się tych zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi. Wyniki naszych pomiarów porównujemy z danymi uzyskanymi ze stacji monitoringowych.',
'Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową i dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF ERNST: urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.\nto najnowsza generacja tego typu lidarów. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. korzyść mamy potrójną: użyteczną, przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad urządzeniem I dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.\nNasze przedsięwzięcie nie ma charakteru komercyjnego. Chcemy np. mierzyć w Warszawie rozkłady koncentracji tlenków azotu. Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie.',
'Jutro odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF ERNST: urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.\n\nto kosztowne urządzenie będzie służyło tylko naukowcom?\n\nlidar jest rzeczywiście drogi. to najnowsza generacja tego typu lidarów. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. korzyść mamy potrójną: użyteczną, przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad tym urządzeniem I dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.\n\nCzy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?\n\nNie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze. Ale prowadzimy badania mające na celu rozszerzenie możliwości lidaru o taką substancję jak fosgen.\n\nstać nas będzie na prowadzenie pomiarów ozonu w miastach? \n\nNasze przedsięwzięcie nie ma charakteru komercyjnego. Chcemy np. mierzyć w Warszawie rozkłady koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta. Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie. zanieczyszczenie było dużo większe niż obecnie i wszystko wskazuje na to, że będzie dalej spadać.\nDIAL dzisiaj ma zdecydowanie największe wzięcie w ochronie środowiska. \n\nFizycy dotychczas nie zajmowali się ochroną środowiska?\n\nTaka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu.',
'Co to jest lidar? \n\nPROF. KRZYSZTOF ERNST: urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.\nto najnowsza generacja tego typu lidarów. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. korzyść mamy potrójną: użyteczną, wykonujemy pomiary skażeń atmosferycznych, naukową - rozwijamy badania nad urządzeniem I dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. staramy się rozszerzyć jego zastosowanie na inne substancje występujące w atmosferze. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej. zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką. Nasz lidar ma większe możliwości niż stacje monitoringowe. Możemy śledzić ewolucję rozprzestrzeniania się zanieczyszczeń, ich kierunek i zmiany. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji Instytutu Geofizyki i współpraca z Freie Universität Berlin.',
"Co to jest lidar? \nPROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. DIAL - lidar absorbcji różnicowej potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. staramy się rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze. Pakiet software'u wzbogacamy o nowe algorytmy, które potrafią dokładniej rozszyfrowywać sygnał lidarowy, a w konsekwencji skażenia. Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej. \n\nChcemy mierzyć w Warszawie rozkłady koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta. zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką. Nasz lidar ma większe możliwości niż stacje monitoringowe. Możemy śledzić ewolucję rozprzestrzeniania się zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi. \n\nDIAL jest tym typem lidara, który dzisiaj ma największe wzięcie w ochronie środowiska. Z lidarów korzysta meteorologia. W Europie takich lidarów jak nasz jest zaledwie kilka. Nasz lidar jest najnowocześniejszym w Polsce. Ponadto jest lidarem ruchomym, zainstalowanym na samochodzie. \n\nFizycy dotychczas nie zajmowali się ochroną środowiska?\nTaka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji Instytutu Geofizyki i współpraca z Freie Universität Berlin.",
'Co to jest lidar? \nPROF. KRZYSZTOF ERNST: to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką. Nasz lidar ma większe możliwości niż stacje monitoringowe. Możemy śledzić ewolucję rozprzestrzeniania się zanieczyszczeń, ich kierunek i zmiany.'],
'ratio': [10, 20, 5, 10, 20, 5, 10, 20, 5, 10, 20, 5, 10, 20, 5],
'spans': [{'end': [244, 396, 457, 867, 922, 1022, 1103, 1877],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'Czy to kosztowne urządzenie będzie służyło tylko naukowcom?',
'Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad tym urządzeniem',
'.',
'I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą.'],
'start': [153, 247, 398, 760, 875, 1020, 1023, 1631]},
{'end': [244,
396,
457,
867,
922,
1022,
1103,
1878,
2132,
2296,
2969,
6225,
6985,
7047,
7282,
7326,
7383],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'Czy to kosztowne urządzenie będzie służyło tylko naukowcom?',
'Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad tym urządzeniem',
'.',
'I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą.',
'Czy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?',
'Nie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze łącznie z dostarczeniem informacji o ich rozkładzie.',
'Możemy np. badać zawartość ozonu w troposferze.',
'W Europie takich lidarów jak nasz jest zaledwie kilka. Większość z nich mierzy ozon, dwutlenek siarki i tlenek azotu.',
'',
'Fizycy dotychczas nie zajmowali się ochroną środowiska?',
'Taka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji',
'Instytutu Geofizyki i',
'współpraca z Freie Universität Berlin.'],
'start': [153,
247,
398,
760,
875,
1020,
1023,
1631,
2064,
2134,
2921,
6108,
6984,
6992,
7049,
7304,
7344]},
{'end': [244, 396, 1103, 1774, 1877],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał',
'.'],
'start': [153, 247, 1102, 1631, 1876]},
{'end': [159,
227,
243,
360,
804,
882,
1025,
1044,
1103,
1454,
1540,
1629,
2848],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL.',
'lidar',
'Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną,',
'naukową',
'I',
'dydaktyczną',
'.',
'Żeby przetworzyć',
'sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać',
'dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji.',
'muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć.'],
'start': [153,
173,
238,
270,
591,
875,
1022,
1033,
1101,
1437,
1459,
1549,
2670]},
{'end': [159, 227, 243, 396, 922, 1103, 1629, 2062, 2582, 2848],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL.',
'lidar',
'Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'Jest to najnowsza generacja tego typu lidarów. DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem',
'. I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Żeby przetworzyć tzw. sygnał lidarowy, czyli to co wraca po rozproszeniu światła do układu, i otrzymać rozsądne dane dotyczące rozkładu koncentracji - trzeba dokonać skomplikowanych operacji.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej, dzięki której ten lidar u nas zaistniał i dla której w ramach naszych zobowiązań wykonujemy pomiary zanieczyszczeń nad naszą wspólną granicą. Zasadniczy koszt jego budowy pokryła uzyskana od Fundacji dotacja. Część pieniędzy przekazał też Narodowy Fundusz Ochrony Środowiska i Gospodarki Wodnej oraz Komitet Badań Naukowych.',
'',
'Lidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć.'],
'start': [153, 173, 238, 270, 542, 1020, 1437, 1631, 2581, 2602]},
{'end': [159, 227, 243, 360, 804, 882, 1025, 1044, 1102],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL.',
'lidar',
'Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną,',
'naukową',
'I',
'dydaktyczną',
'.'],
'start': [153, 173, 238, 270, 591, 875, 1022, 1033, 1101]},
{'end': [246, 396, 922, 1102, 4763],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem',
'I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów.'],
'start': [153, 247, 590, 1022, 4555]},
{'end': [246, 396, 480, 542, 1021, 1102, 2920, 4989],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi. Nazywane też jest radarem laserowym.',
'Tego typu lidar jest',
'drogi, kosztuje około miliona marek niemieckich.',
'DIAL - lidar absorbcji różnicowej jest urządzeniem inteligentnym, to znaczy potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen. Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową - rozwijamy badania nad tym urządzeniem, staramy się m.in. rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze.',
'I korzyść dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Lidar typu DIAL jest oparty na pomiarze absorbcji różnicowej, czyli muszą być zastosowane dwie wiązki laserowe o dwóch różnych długościach fali, z których jedna jest absorbowana, a druga nie jest absorbowana przez substancję, którą chcemy wykryć. Cząsteczki, które wykrywamy mają pasma absorbcji w bliskim nadfiolecie.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe. Mierzy zanieczyszczenia nie tylko lokalnie, ale też ich rozkład w przestrzeni, z wysoką rozdzielczością przestrzenną i na odległość kilku kilometrów. Możemy zatem śledzić ewolucję rozprzestrzeniania się tych zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi. Wyniki naszych pomiarów porównujemy z danymi uzyskanymi ze stacji monitoringowych.'],
'start': [153, 247, 459, 493, 590, 1022, 2602, 4555]},
{'end': [246, 360, 626, 883, 920, 1102],
'span_text': ['Jutro w Instytucie odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'',
'Z lidara korzyść mamy potrójną: użyteczną, bo przy jego pomocy wykonujemy pomiary skażeń atmosferycznych, korzyść naukową',
'i',
'dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.'],
'start': [153, 247, 625, 760, 919, 1032]},
{'end': [158,
262,
271,
359,
397,
590,
761,
803,
867,
907,
922,
1025,
1102,
3311,
3516,
3595,
3623,
3675,
4226,
4332],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF',
'ERNST:',
'urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'',
'to najnowsza generacja tego typu lidarów.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'korzyść mamy potrójną: użyteczną,',
'przy jego pomocy wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad',
'urządzeniem',
'I',
'dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'',
'Nasze przedsięwzięcie nie ma charakteru komercyjnego.',
'Chcemy np. mierzyć w Warszawie rozkłady',
'koncentracji tlenków azotu',
'.',
'Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu',
'granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie.'],
'start': [153,
172,
263,
279,
396,
548,
699,
769,
806,
875,
911,
1022,
1033,
3310,
3462,
3556,
3596,
3674,
4158,
4233]},
{'end': [158,
262,
271,
359,
398,
459,
498,
543,
590,
761,
803,
867,
922,
1025,
1102,
2242,
2300,
2406,
3247,
3311,
3516,
3595,
3675,
4226,
4333,
5130,
5241,
5439,
5661,
5756,
7113],
'span_text': ['Jutro',
'odbędzie sie pokaz nowego polskiego lidara typu DIAL. Co to jest lidar? \n\nPROF. KRZYSZTOF',
'ERNST:',
'urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'',
'to kosztowne urządzenie będzie służyło tylko naukowcom?',
'lidar jest rzeczywiście drogi',
'.',
'to najnowsza generacja tego typu lidarów.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'korzyść mamy potrójną: użyteczną,',
'przy jego pomocy wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad tym urządzeniem',
'I',
'dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.',
'Czy wszystkie zanieczyszczenia będzie można wykryć za pomocą lidaru?\n\nNie ma takiego jednostkowego urządzenia, które by wykrywało i mierzyło wszystkie szkodliwe gazy w atmosferze',
'. Ale',
'prowadzimy badania mające na celu rozszerzenie możliwości lidaru o taką substancję jak fosgen.',
'',
'stać nas będzie na prowadzenie pomiarów ozonu w miastach?',
'Nasze przedsięwzięcie nie ma charakteru komercyjnego.',
'Chcemy np. mierzyć w Warszawie rozkłady',
'koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta.',
'Koncentrujemy się głównie na Czarnym Trójkącie - obszarze u zbiegu',
'granic: Polski, Niemiec i Czech, do niedawna uważanym za najbardziej zdegradowany region w Europie.',
'zanieczyszczenie',
'było dużo większe niż obecnie i wszystko wskazuje na to, że będzie dalej spadać.',
'',
'DIAL',
'dzisiaj ma zdecydowanie największe wzięcie w ochronie środowiska.',
'Fizycy dotychczas nie zajmowali się ochroną środowiska?\n\nTaka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu.'],
'start': [153,
172,
263,
279,
396,
402,
469,
541,
548,
699,
769,
806,
875,
1022,
1033,
2062,
2294,
2312,
3245,
3251,
3462,
3556,
3596,
4158,
4233,
5114,
5160,
5438,
5656,
5690,
6990]},
{'end': [262, 271, 359, 397, 590, 761, 803, 807, 867, 907, 922, 1025, 1102],
'span_text': ['Co to jest lidar? \n\nPROF. KRZYSZTOF',
'ERNST:',
'urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'',
'to najnowsza generacja tego typu lidarów.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'korzyść mamy potrójną: użyteczną,',
'',
'wykonujemy pomiary skażeń atmosferycznych,',
'naukową - rozwijamy badania nad',
'urządzeniem',
'I',
'dydaktyczną - szkolimy studentów zainteresowanych ochroną środowiska.'],
'start': [227,
263,
279,
396,
548,
699,
769,
806,
824,
875,
911,
1022,
1033]},
{'end': [245,
360,
761,
936,
971,
1022,
1733,
1878,
4159,
4614,
4772,
4818,
4860,
4906,
7283,
7326,
7383],
'span_text': ['Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'staramy się',
'rozszerzyć jego zastosowanie',
'na inne substancje występujące w atmosferze.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej',
'.',
'zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe.',
'Możemy',
'śledzić ewolucję rozprzestrzeniania się',
'zanieczyszczeń, ich kierunek i zmiany',
'.',
'Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji',
'Instytutu Geofizyki i',
'współpraca z Freie Universität Berlin.'],
'start': [227,
246,
699,
924,
942,
977,
1631,
1876,
4076,
4555,
4765,
4778,
4823,
4904,
7114,
7305,
7344]},
{'end': [245,
360,
625,
761,
936,
1022,
1311,
1357,
1436,
1733,
1878,
3247,
3311,
3563,
3676,
4159,
4614,
4772,
4818,
4906,
5410,
5439,
5701,
5789,
6163,
6364,
6472,
7048,
7283,
7326,
7383],
'span_text': ['Co to jest lidar?',
'PROF. KRZYSZTOF ERNST: Jest to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'DIAL - lidar absorbcji różnicowej',
'potrafi rozróżnić, co mierzy. Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'staramy się',
'rozszerzyć jego zastosowanie także na inne substancje występujące w atmosferze.',
"Pakiet software'u",
'wzbogacamy o nowe algorytmy, które potrafią',
'dokładniej rozszyfrowywać sygnał lidarowy, a w konsekwencji skażenia.',
'Badania, które prowadzimy, są zainicjowane i finansowane przez Fundację Współpracy Polsko-Niemieckiej',
'.',
'',
'',
'Chcemy',
'mierzyć w Warszawie rozkłady koncentracji tlenków azotu, ich ewolucję czasową nad różnymi arteriami miasta.',
'zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe.',
'Możemy',
'śledzić ewolucję rozprzestrzeniania się',
'zanieczyszczeń, ich kierunek i zmiany spowodowane m.in. warunkami atmosferycznymi.',
'',
'',
'DIAL jest tym typem lidara, który dzisiaj ma',
'największe wzięcie w ochronie środowiska. Z lidarów korzysta meteorologia.',
'W Europie takich lidarów jak nasz jest zaledwie kilka.',
'Nasz lidar',
'jest najnowocześniejszym w Polsce. Ponadto jest lidarem ruchomym, zainstalowanym na samochodzie.',
'Fizycy dotychczas nie zajmowali się ochroną środowiska?',
'Taka specjalizacja powstala na Wydziale Fizyki UW dwa lata temu. Gwarancją sukcesu naszego programu dydaktyczno-badawczego jest udział w nim zakładów należących do Instytutu Fizyki Doświadczalnej UW, Pracowni Przetwarzania Informacji',
'Instytutu Geofizyki i',
'współpraca z Freie Universität Berlin.'],
'start': [227,
246,
591,
668,
924,
942,
1293,
1313,
1366,
1631,
1876,
3246,
3310,
3556,
3567,
4076,
4555,
4765,
4778,
4823,
5409,
5438,
5656,
5714,
6108,
6353,
6374,
6990,
7049,
7305,
7344]},
{'end': [245, 271, 360, 761, 4159, 4614, 4772, 4818, 4860, 4905],
'span_text': ['Co to jest lidar?',
'PROF. KRZYSZTOF ERNST:',
'to urządzenie pozwalające wyznaczać zanieczyszczenia atmosfery metodami optycznymi.',
'Wykrywa ozon, dwutlenek siarki, tlenki azotu, benzen, toluen.',
'zamierzamy zakończyć pomiary skażeń atmosferycznych nad granicą polsko-niemiecką.',
'Nasz lidar ma większe możliwości niż stacje monitoringowe.',
'Możemy',
'śledzić ewolucję rozprzestrzeniania się',
'zanieczyszczeń, ich kierunek i zmiany',
'.'],
'start': [227, 246, 276, 699, 4076, 4555, 4765, 4778, 4823, 4904]}],
'type': ['extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract',
'extract']},
'title': 'Lidarowe oczy'}
```
### Data Fields
- `id`: a `string` example identifier
- `date`: date of the original article (`string`)
- `title`: title of the original article (`string`)
- `section`: the section of the newspaper the original article belonged to (`string`)
- `authors`: original article authors (`string`)
- `body`: original article body (list of `string`s)
- `summaries`: a dictionary feature containing summaries of the original article (see the loading sketch after this list), with the following attributes:
  - `ratio`: length of the summary as a percentage of the original article (list of `int32`s)
  - `type`: type of summary - extractive (`extract`) or abstractive (`abstract`) (list of `string`s)
  - `author`: acronym of the summary author (list of `string`s)
  - `body`: body of the summary (list of `string`s)
  - `spans`: a list containing spans for extractive summaries (empty for abstractive summaries):
    - `start`: start of span (`int32`)
    - `end`: end of span (`int32`)
    - `span_text`: span text (`string`)
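To make the nesting concrete, here is a minimal loading sketch (an illustration, not part of the original card: it assumes the corpus is published on the Hugging Face Hub under the `polsum` id and uses the standard `datasets` API; field access follows the example instance above):
```
# Minimal sketch; the `polsum` dataset id is an assumption.
from datasets import load_dataset

dataset = load_dataset("polsum", split="train")
article = dataset[0]
summaries = article["summaries"]

# The summary attributes are parallel lists: index i selects one summary.
for i, (author, ratio, stype) in enumerate(
    zip(summaries["author"], summaries["ratio"], summaries["type"])
):
    spans = summaries["spans"][i]  # dict of parallel `start`/`end`/`span_text` lists
    if stype == "extract":
        body = " ".join(t for t in spans["span_text"] if t)
    else:
        body = summaries["body"][i]
    print(f"summary {i}: author={author}, ratio={ratio}%, type={stype}, length={len(body)}")
```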
### Data Splits
Single train split
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@inproceedings{
ogro:kop:14:lrec,
author = "Ogrodniczuk, Maciej and Kopeć, Mateusz",
pdf = "http://nlp.ipipan.waw.pl/Bib/ogro:kop:14:lrec.pdf",
title = "The {P}olish {S}ummaries {C}orpus",
pages = "3712--3715",
crossref = "lrec:14"
}
@proceedings{
lrec:14,
editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
isbn = "978-2-9517408-8-4",
title = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
url = "http://www.lrec-conf.org/proceedings/lrec2014/index.html",
booktitle = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
address = "Reykjavík, Iceland",
key = "LREC",
year = "2014",
organization = "European Language Resources Association (ELRA)"
}
```
### Contributions
Thanks to [@kldarek](https://github.com/kldarek) for adding this dataset. |
polyglot_ner | ---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- ar
- bg
- ca
- cs
- da
- de
- el
- en
- es
- et
- fa
- fi
- fr
- he
- hi
- hr
- hu
- id
- it
- ja
- ko
- lt
- lv
- ms
- nl
- 'no'
- pl
- pt
- ro
- ru
- sk
- sl
- sr
- sv
- th
- tl
- tr
- uk
- vi
- zh
license:
- unknown
multilinguality:
- multilingual
pretty_name: Polyglot-NER
size_categories:
- unknown
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: polyglot-ner
dataset_info:
- config_name: ca
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 143746026
num_examples: 372665
download_size: 1107018606
dataset_size: 143746026
- config_name: de
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 156744752
num_examples: 547578
download_size: 1107018606
dataset_size: 156744752
- config_name: es
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 145387551
num_examples: 386699
download_size: 1107018606
dataset_size: 145387551
- config_name: fi
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 95175890
num_examples: 387465
download_size: 1107018606
dataset_size: 95175890
- config_name: hi
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 177698330
num_examples: 401648
download_size: 1107018606
dataset_size: 177698330
- config_name: id
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 152560050
num_examples: 463862
download_size: 1107018606
dataset_size: 152560050
- config_name: ko
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 174523416
num_examples: 560105
download_size: 1107018606
dataset_size: 174523416
- config_name: ms
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 155268778
num_examples: 528181
download_size: 1107018606
dataset_size: 155268778
- config_name: pl
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 159684112
num_examples: 623267
download_size: 1107018606
dataset_size: 159684112
- config_name: ru
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 200717423
num_examples: 551770
download_size: 1107018606
dataset_size: 200717423
- config_name: sr
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 183437513
num_examples: 559423
download_size: 1107018606
dataset_size: 183437513
- config_name: tl
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 47104871
num_examples: 160750
download_size: 1107018606
dataset_size: 47104871
- config_name: vi
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 141062258
num_examples: 351643
download_size: 1107018606
dataset_size: 141062258
- config_name: ar
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 183551222
num_examples: 339109
download_size: 1107018606
dataset_size: 183551222
- config_name: cs
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 156792129
num_examples: 564462
download_size: 1107018606
dataset_size: 156792129
- config_name: el
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 195456401
num_examples: 446052
download_size: 1107018606
dataset_size: 195456401
- config_name: et
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 21961619
num_examples: 87023
download_size: 1107018606
dataset_size: 21961619
- config_name: fr
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 147560734
num_examples: 418411
download_size: 1107018606
dataset_size: 147560734
- config_name: hr
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 154151689
num_examples: 629667
download_size: 1107018606
dataset_size: 154151689
- config_name: it
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 147520094
num_examples: 378325
download_size: 1107018606
dataset_size: 147520094
- config_name: lt
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 165319919
num_examples: 848018
download_size: 1107018606
dataset_size: 165319919
- config_name: nl
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 150737871
num_examples: 520664
download_size: 1107018606
dataset_size: 150737871
- config_name: pt
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 145627857
num_examples: 396773
download_size: 1107018606
dataset_size: 145627857
- config_name: sk
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 134174889
num_examples: 500135
download_size: 1107018606
dataset_size: 134174889
- config_name: sv
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 157058369
num_examples: 634881
download_size: 1107018606
dataset_size: 157058369
- config_name: tr
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 164456506
num_examples: 607324
download_size: 1107018606
dataset_size: 164456506
- config_name: zh
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 165056969
num_examples: 1570853
download_size: 1107018606
dataset_size: 165056969
- config_name: bg
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 190509195
num_examples: 559694
download_size: 1107018606
dataset_size: 190509195
- config_name: da
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 150551293
num_examples: 546440
download_size: 1107018606
dataset_size: 150551293
- config_name: en
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 145491677
num_examples: 423982
download_size: 1107018606
dataset_size: 145491677
- config_name: fa
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 180093656
num_examples: 492903
download_size: 1107018606
dataset_size: 180093656
- config_name: he
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 177231613
num_examples: 459933
download_size: 1107018606
dataset_size: 177231613
- config_name: hu
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 160702240
num_examples: 590218
download_size: 1107018606
dataset_size: 160702240
- config_name: ja
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 193679570
num_examples: 1691018
download_size: 1107018606
dataset_size: 193679570
- config_name: lv
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 76256241
num_examples: 331568
download_size: 1107018606
dataset_size: 76256241
- config_name: 'no'
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 152431612
num_examples: 552176
download_size: 1107018606
dataset_size: 152431612
- config_name: ro
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 96369897
num_examples: 285985
download_size: 1107018606
dataset_size: 96369897
- config_name: sl
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 148140079
num_examples: 521251
download_size: 1107018606
dataset_size: 148140079
- config_name: th
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 360409343
num_examples: 217631
download_size: 1107018606
dataset_size: 360409343
- config_name: uk
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 198251631
num_examples: 561373
download_size: 1107018606
dataset_size: 198251631
- config_name: combined
features:
- name: id
dtype: string
- name: lang
dtype: string
- name: words
sequence: string
- name: ner
sequence: string
splits:
- name: train
num_bytes: 6286855097
num_examples: 21070925
download_size: 1107018606
dataset_size: 6286855097
---
# Dataset Card for Polyglot-NER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://sites.google.com/site/rmyeid/projects/polylgot-ner](https://sites.google.com/site/rmyeid/projects/polylgot-ner)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 45.39 GB
- **Size of the generated dataset:** 12.54 GB
- **Total amount of disk used:** 57.93 GB
### Dataset Summary
Polyglot-NER
A training dataset automatically generated from Wikipedia and Freebase for the task
of named entity recognition. The dataset contains the basic Wikipedia-based
training data (with coreference resolution) for 40 languages. The details of the
procedure used to generate the data are outlined in Section 3 of the paper
(https://arxiv.org/abs/1410.3791). Each config contains the data corresponding
to a different language. For example, "es" includes only Spanish examples.
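As a usage illustration (a minimal sketch assuming the standard `datasets` loading API; not part of the original release), a single language config can be loaded by name:
```
# Minimal sketch: load the Spanish config and print the first sentence's
# token/tag pairs. Config names are the language codes listed under Languages.
from datasets import load_dataset

dataset = load_dataset("polyglot_ner", "es", split="train")

example = dataset[0]
for word, tag in zip(example["words"], example["ner"]):
    print(f"{word}\t{tag}")
```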
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### ar
- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 183.55 MB
- **Total amount of disk used:** 1.29 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": "2",
"lang": "ar",
"ner": ["O", "O", "O", "O", "O", "O", "O", "O", "LOC", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "PER", "PER", "PER", "PER", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
"words": "[\"وفي\", \"مرحلة\", \"موالية\", \"أنشأت\", \"قبيلة\", \"مكناسة\", \"الزناتية\", \"مكناسة\", \"تازة\", \",\", \"وأقام\", \"بها\", \"المرابطون\", \"قلعة\", \"..."
}
```
#### bg
- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 190.51 MB
- **Total amount of disk used:** 1.30 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": "1",
"lang": "bg",
"ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
"words": "[\"Дефиниция\", \"Наименованията\", \"\\\"\", \"книжовен\", \"\\\"/\\\"\", \"литературен\", \"\\\"\", \"език\", \"на\", \"български\", \"за\", \"тази\", \"кодифи..."
}
```
#### ca
- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 143.75 MB
- **Total amount of disk used:** 1.25 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": "2",
"lang": "ca",
"ner": "[\"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O\", \"O...",
"words": "[\"Com\", \"a\", \"compositor\", \"deixà\", \"un\", \"immens\", \"llegat\", \"que\", \"inclou\", \"8\", \"simfonies\", \"(\", \"1822\", \"),\", \"diverses\", ..."
}
```
#### combined
- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 6.29 GB
- **Total amount of disk used:** 7.39 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": "18",
"lang": "es",
"ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
"words": "[\"Los\", \"cambios\", \"en\", \"la\", \"energía\", \"libre\", \"de\", \"Gibbs\", \"\\\\\", \"Delta\", \"G\", \"nos\", \"dan\", \"una\", \"cuantificación\", \"de..."
}
```
#### cs
- **Size of downloaded dataset files:** 1.11 GB
- **Size of the generated dataset:** 156.79 MB
- **Total amount of disk used:** 1.26 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"id": "3",
"lang": "cs",
"ner": ["O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O"],
"words": "[\"Historie\", \"Symfonická\", \"forma\", \"se\", \"rozvinula\", \"se\", \"především\", \"v\", \"období\", \"klasicismu\", \"a\", \"romantismu\", \",\", \"..."
}
```
### Data Fields
The data fields are the same among all splits.
#### ar
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
#### bg
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
#### ca
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
#### combined
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
#### cs
- `id`: a `string` feature.
- `lang`: a `string` feature.
- `words`: a `list` of `string` features.
- `ner`: a `list` of `string` features.
### Data Splits
| name | train |
|----------|---------:|
| ar | 339109 |
| bg | 559694 |
| ca | 372665 |
| combined | 21070925 |
| cs | 564462 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{polyglotner,
author = {Al-Rfou, Rami and Kulkarni, Vivek and Perozzi, Bryan and Skiena, Steven},
title = {{Polyglot-NER}: Massive Multilingual Named Entity Recognition},
journal = {{Proceedings of the 2015 {SIAM} International Conference on Data Mining, Vancouver, British Columbia, Canada, April 30- May 2, 2015}},
month = {April},
year = {2015},
publisher = {SIAM},
}
```
### Contributions
Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset. |
prachathai67k | ---
annotations_creators:
- found
language_creators:
- found
language:
- th
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- topic-classification
paperswithcode_id: prachathai-67k
pretty_name: prachathai67k
dataset_info:
features:
- name: url
dtype: string
- name: date
dtype: string
- name: title
dtype: string
- name: body_text
dtype: string
- name: politics
dtype:
class_label:
names:
'0': neg
'1': pos
- name: human_rights
dtype:
class_label:
names:
'0': neg
'1': pos
- name: quality_of_life
dtype:
class_label:
names:
'0': neg
'1': pos
- name: international
dtype:
class_label:
names:
'0': neg
'1': pos
- name: social
dtype:
class_label:
names:
'0': neg
'1': pos
- name: environment
dtype:
class_label:
names:
'0': neg
'1': pos
- name: economics
dtype:
class_label:
names:
'0': neg
'1': pos
- name: culture
dtype:
class_label:
names:
'0': neg
'1': pos
- name: labor
dtype:
class_label:
names:
'0': neg
'1': pos
- name: national_security
dtype:
class_label:
names:
'0': neg
'1': pos
- name: ict
dtype:
class_label:
names:
'0': neg
'1': pos
- name: education
dtype:
class_label:
names:
'0': neg
'1': pos
config_name: prachathai67k
splits:
- name: train
num_bytes: 865848436
num_examples: 54379
- name: validation
num_bytes: 108641386
num_examples: 6721
- name: test
num_bytes: 110034036
num_examples: 6789
download_size: 254240975
dataset_size: 1084523858
---
# Dataset Card for `prachathai67k`
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/PyThaiNLP/prachathai-67k
- **Repository:** https://github.com/PyThaiNLP/prachathai-67k
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** https://github.com/PyThaiNLP/
### Dataset Summary
`prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com
The `prachathai-67k` dataset was scraped from the news site [Prachathai](prachathai.com). We filtered out articles with fewer than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by [@lukkiddd](https://github.com/lukkiddd) and cleaned by [@cstorm125](https://github.com/cstorm125). Download the dataset [here](https://www.dropbox.com/s/fsxepdka4l2pr45/prachathai-67k.zip?dl=1). You can also see preliminary exploration in [exploration.ipynb](https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb).
This dataset is a part of [pyThaiNLP](https://github.com/PyThaiNLP/) Thai text [classification-benchmarks](https://github.com/PyThaiNLP/classification-benchmarks). For the benchmark, we selected the following tags with substantial volume that resemble **classifying types of articles** (a loading sketch follows the list):
* `การเมือง` - politics
* `สิทธิมนุษยชน` - human_rights
* `คุณภาพชีวิต` - quality_of_life
* `ต่างประเทศ` - international
* `สังคม` - social
* `สิ่งแวดล้อม` - environment
* `เศรษฐกิจ` - economics
* `วัฒนธรรม` - culture
* `แรงงาน` - labor
* `ความมั่นคง` - national_security
* `ไอซีที` - ict
* `การศึกษา` - education
### Supported Tasks and Leaderboards
multi-label text classification, language modeling
### Languages
Thai
## Dataset Structure
### Data Instances
{'body_text': '17 พ.ย. 2558 Blognone [1] รายงานว่า กลุ่มแฮคเกอร์ Anonymous ประกาศสงครามไซเบอร์กับกลุ่มหัวรุนแรงหลังจากกลุ่ม IS ออกมาประกาศว่าเป็นผู้อยู่เบื้องหลังการโจมตีกรุงปารีสในคืนวันศุกร์ที่ผ่านมา\n\n\nภาพในคลิปใน YouTube โฆษกของกลุ่มแฮคเกอร์สวมหน้ากากที่เป็นสัญลักษณ์ของกลุ่มได้ออกมาอ่านแถลงเป็นภาษาฝรั่งเศส มีใจความว่า จากการโจมตีของกลุ่ม IS ในกรุงปารีส กลุ่ม Anonymous ทั่วโลกจะตามล่ากลุ่ม IS เหมือนที่เคยทำตอนที่มีการโจมตีสำนักพิมพ์ Charlie Hebdo และครั้งนี้จะเป็นปฏิบัติการโจมตีครั้งใหญ่ที่สุดของกลุ่ม Anonymous เลย นอกจากนี้กลุ่ม Anonymous ยังแสดงความเสียใจต่อครอบครัวผู้สูญเสียในเหตุการณ์ครั้งนี้\nกลุ่ม Anonymous เคยประกาศสงครามกับกลุ่ม IS หลังจากการโจมตีสำนักพิมพ์ Charlie Hebdo ที่ฝรั่งเศสเมื่อต้นปีที่ผ่านมา ซึ่งครั้งนั้นกลุ่ม Anonymous อ้างว่าได้ระงับบัญชีผู้ใช้งานที่เกี่ยวข้องกับ IS ไปหลายพันบัญชี (อ่านรายละเอียดเพิ่มเติม จากBlognone ที่\xa0\xa0กลุ่มแฮคเกอร์ Anonymous ประกาศสงครามไซเบอร์ขอกวาดล้างพวก ISIS [2])', 'culture': 0, 'date': '2015-11-17 18:14', 'economics': 0, 'education': 0, 'environment': 0, 'human_rights': 0, 'ict': 1, 'international': 1, 'labor': 0, 'national_security': 0, 'politics': 0, 'quality_of_life': 0, 'social': 0, 'title': 'แฮคเกอร์ Anonymous ลั่นทำสงครามไซเบอร์ครั้งใหญ่สุดกับกลุ่ม IS', 'url': 'https://prachatai.com/print/62490'}
{'body_text': 'แถลงการณ์\n\n\xa0\n\nองค์การนักศึกษามหาวิทยาลัยธรรมศาสตร์\n\n\xa0\n\nมหาวิทยาลัยธรรมศาสตร์ก่อตั้งขึ้นภายใต้แนวคิดการให้การศึกษากับประชาชนเพื่อสนับสนุนการปกครองระบอบประชาธิปไตย อีกทั้งยังเป็นสถาบันหนึ่งที่อยู่เคียงข้างประชาชนมาโดยตลอด\n\n\xa0\n\nสถานการณ์สังคมไทยปัจจุบันได้เกิดความขัดแย้งทางการเมือง ทางแนวคิด จนลุกลามเป็นวิกฤตการณ์อันหาทางออกได้ยากยิ่ง องค์กรนักศึกษามหาวิทยาลัยธรรมศาสตร์ขอร้องเรียนและเสนอแนะต่อทุกฝ่าย โดยยึดหลักแนวทางตามรัฐธรรมนูญแห่งราชอาณาจักรไทย พ.ศ. ๒๕๕๐ อันเป็นกฎหมายสูงสุดในการจัดการปกครองรัฐ ที่มีผลบังคับใช้อยู่ในปัจจุบันซึ่งผ่านการประชามติจากปวงชนชาวไทยเมื่อวันที่ ๑๙ สิงหาคม พ.ศ. ๒๕๕๐ แล้วดังต่อนี้\n\n\xa0\n\n๑.การชุมชมโดยสงบและปราศจากอาวุธย่อมได้รับการคุ้มครองตามรัฐธรรมนูญ แต่หากการชุมนุมและเคลื่อนไหวของกลุ่มใดๆ มีการละเมิดสิทธิและเสรีภาพของผู้อื่นหรือก่อให้เกิดความเสียหายต่อชีวิตและทรัพย์สินของบุคคลและส่วนรวมนั้น ไม่สามารถกระทำได้ การใช้ความรุนแรง การกระทำอุกอาจต่างๆ ทั้งต่อบุคคลและทรัพย์สิน การยั่วยุ ปลุกระดมเพื่อหวังผลในการปะทะต่อสู้ จึงควรได้รับการกล่าวโทษ\n\n\xa0\n\nดังนั้นทั้งกลุ่มพันธมิตรประชาชนเพื่อประชาธิปไตย (พธม.) และกลุ่มแนวร่วมประชาธิปไตยไม่เอาเผด็จการแห่งชาติ (นปช.) จึงควรยอมรับกระบวนการตามกฎหมาย และหากถูกกล่าวหาไม่ว่ากรณีใดๆ ก็ควรพิสูจน์ความบริสุทธิ์โดยใช้กระบวนการยุติธรรม และหากจะยังชุมนุมต่อไปก็ยังคงทำได้ภายใต้บทบัญญัติแห่งกฎหมาย\n\n\xa0\n\nองค์กรนักศึกษามหาวิทยาลัยธรรมศาสตร์ จึงร้องขอให้หน่วยงานต่างๆ ที่เกี่ยวข้องดำเนินการตามกระบวนการทางกฎหมายกับการกระทำที่ผิดบทบัญญัติแห่งกฎหมายที่ทุกฝ่ายได้กระทำไป\n\n\xa0\n\n๒.นายสมัคร สุนทรเวช นายกรัฐมนตรี ไม่มีความเหมาะสมในการบริหารราชการแผ่นดินขาดหลักธรรมาภิบาล แต่ทั้งนี้นายสมัคร สุนทรเวช ยังคงยืนยันและกล่าวอ้างความชอบธรรมตามระบอบประชาธิปไตยภายใต้รัฐธรรมนูญ โดยไม่คำนึงถึงกระแสเรียกร้องใดๆ อันส่งผลให้ความขัดแย้งทางสังคมยิ่งบานปลายจนกลายเป็นวิกฤตการณ์เช่นปัจจุบัน ซึ่งก่อให้เกิดความเสียหายต่อประเทศแนวโน้มจะคลี่คลาย\n\n\xa0\n\nองค์การนักศึกษามหาวิทยาลัยธรรมศาสตร์ จึงเห็นว่า ควรใช้สิทธิตามรัฐธรรมนูญแห่งราชอาณาจักรไทย พุทธศักราช ๒๕๕๐ มาตรา ๑๖๔ โดยการเข้าชื่อเพื่อร้องต่อประธานวุฒิสภาเพื่อให้มีมติตามมาตรา ๒๗๔ ให้ถอดถอนนายสมัคร สุนทรเวช ออกจากตำแหน่งนายกรัฐมนตรีตามมาตรา ๒๗๐ ณ ลานโพ มหาวิทยาลัยธรรมศาสตร์ ท่าพระจันทร์ อาคารเรียนรวมสังคมศาสตร์ อาคารปิยชาติ และตึกกิจกรรมนักศึกษา มหาวิทยาลัยธรรมศาสตร์ ศูนย์รังสิต\n\n\xa0\n\n\xa0\n\nด้วยความสมานฉันท์\n\nองค์การนักศึกษามหาวิทยาลัยธรรมศาสตร์', 'culture': 0, 'date': '2008-09-06 03:36', 'economics': 0, 'education': 0, 'environment': 0, 'human_rights': 0, 'ict': 0, 'international': 0, 'labor': 0, 'national_security': 0, 'politics': 1, 'quality_of_life': 0, 'social': 0, 'title': 'แถลงการณ์ อมธ.แนะใช้สิทธิ ตาม รธน.เข้าชื่อร้องต่อประธานวุฒิสภาถอดถอน "สมัคร" จากตำแหน่งนายกฯ', 'url': 'https://prachatai.com/print/18038'}
### Data Fields
- `url`: url of the article
- `date`: date the article was published
- `title`: title of the article
- `body_text`: body text of the article
- `politics`: 1 if sample has this tag else 0
- `human_rights`: 1 if sample has this tag else 0
- `quality_of_life`: 1 if sample has this tag else 0
- `international`: 1 if sample has this tag else 0
- `social`: 1 if sample has this tag else 0
- `environment`: 1 if sample has this tag else 0
- `economics`: 1 if sample has this tag else 0
- `culture`: 1 if sample has this tag else 0
- `labor`: 1 if sample has this tag else 0
- `national_security`: 1 if sample has this tag else 0
- `ict`: 1 if sample has this tag else 0
- `education`: 1 if sample has this tag else 0
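As a minimal sketch of working with these fields via the `datasets` library (assuming the dataset loads under the name `prachathai67k`), each tag column can be read as one bit of a multi-label target:
```python
from datasets import load_dataset

# Load the corpus; the dataset name is assumed to match this card.
dataset = load_dataset("prachathai67k")

tags = ["politics", "human_rights", "quality_of_life", "international",
        "social", "environment", "economics", "culture", "labor",
        "national_security", "ict", "education"]

# Build the multi-label target for the first training article.
example = dataset["train"][0]
labels = [tag for tag in tags if example[tag] == 1]
print(example["title"], labels)
```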
### Data Splits
| | train | valid | test |
|-------------------|-------|--------|------|
| # articles | 54379 | 6721 | 6789 |
| politics | 31401 | 3852 | 3842 |
| human_rights | 12061 | 1458 | 1511 |
| quality_of_life | 9037 | 1144 | 1127 |
| international | 6432 | 828 | 834 |
| social | 6321 | 782 | 789 |
| environment | 6157 | 764 | 772 |
| economics | 3994 | 487 | 519 |
| culture | 3279 | 388 | 398 |
| labor | 2905 | 375 | 350 |
| national_security | 2865 | 339 | 338 |
| ict | 2326 | 285 | 292 |
| education | 2093 | 248 | 255 |
## Dataset Creation
### Curation Rationale
The data was scraped from the news site [Prachathai](https://prachatai.com) from August 24, 2004 to November 15, 2018. The initial intention was to use the dataset as a benchmark for Thai text classification. Due to the size of the dataset, it can also be used for language modeling.
### Source Data
#### Initial Data Collection and Normalization
67,889 articles with 51,797 tags were scraped from the news site [Prachathai](https://prachatai.com) from August 24, 2004 to November 15, 2018. We filtered out articles with less than 500 characters of body text, mostly images and cartoons.
#### Who are the source language producers?
Prachathai.com
### Annotations
#### Annotation process
Tags were annotated by the news website Prachathai.com.
#### Who are the annotators?
We assume that the reporters who wrote the articles or other Prachathai staff gave each article its tags.
### Personal and Sensitive Information
We do not expect any personal and sensitive information to be present since all data are public news articles.
## Considerations for Using the Data
### Social Impact of Dataset
- classification benchmark for multi-label Thai text classification
### Discussion of Biases
Prachathai.com is a left-leaning, human-rights-focused news site, hence the unusual news labels such as human rights and quality of life. The news articles are expected to be left-leaning in content.
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
PyThaiNLP
### Licensing Information
CC-BY-NC
### Citation Information
@misc{prachathai67k,
author = {cstorm125, lukkiddd },
title = {prachathai67k},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished={\\url{https://github.com/PyThaiNLP/prachathai-67k}},
}
### Contributions
Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset. |
pragmeval | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: pragmeval
configs:
- emergent
- emobank-arousal
- emobank-dominance
- emobank-valence
- gum
- mrda
- pdtb
- persuasiveness-claimtype
- persuasiveness-eloquence
- persuasiveness-premisetype
- persuasiveness-relevance
- persuasiveness-specificity
- persuasiveness-strength
- sarcasm
- squinky-formality
- squinky-implicature
- squinky-informativeness
- stac
- switchboard
- verifiability
dataset_info:
- config_name: verifiability
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': experiential
'1': unverifiable
'2': non-experiential
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 592520
num_examples: 5712
- name: validation
num_bytes: 65215
num_examples: 634
- name: test
num_bytes: 251799
num_examples: 2424
download_size: 5330724
dataset_size: 909534
- config_name: emobank-arousal
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 567660
num_examples: 5470
- name: validation
num_bytes: 71221
num_examples: 684
- name: test
num_bytes: 69276
num_examples: 683
download_size: 5330724
dataset_size: 708157
- config_name: switchboard
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': Response Acknowledgement
'1': Uninterpretable
'2': Or-Clause
'3': Reject
'4': Statement-non-opinion
'5': 3rd-party-talk
'6': Repeat-phrase
'7': Hold Before Answer/Agreement
'8': Signal-non-understanding
'9': Offers, Options Commits
'10': Agree/Accept
'11': Dispreferred Answers
'12': Hedge
'13': Action-directive
'14': Tag-Question
'15': Self-talk
'16': Yes-No-Question
'17': Rhetorical-Question
'18': No Answers
'19': Open-Question
'20': Conventional-closing
'21': Other Answers
'22': Acknowledge (Backchannel)
'23': Wh-Question
'24': Declarative Wh-Question
'25': Thanking
'26': Yes Answers
'27': Affirmative Non-yes Answers
'28': Declarative Yes-No-Question
'29': Backchannel in Question Form
'30': Apology
'31': Downplayer
'32': Conventional-opening
'33': Collaborative Completion
'34': Summarize/Reformulate
'35': Negative Non-no Answers
'36': Statement-opinion
'37': Appreciation
'38': Other
'39': Quotation
'40': Maybe/Accept-part
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 1021220
num_examples: 18930
- name: validation
num_bytes: 116058
num_examples: 2113
- name: test
num_bytes: 34013
num_examples: 649
download_size: 5330724
dataset_size: 1171291
- config_name: persuasiveness-eloquence
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 153946
num_examples: 725
- name: validation
num_bytes: 19376
num_examples: 91
- name: test
num_bytes: 18379
num_examples: 90
download_size: 5330724
dataset_size: 191701
- config_name: mrda
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': Declarative-Question
'1': Statement
'2': Reject
'3': Or-Clause
'4': 3rd-party-talk
'5': Continuer
'6': Hold Before Answer/Agreement
'7': Assessment/Appreciation
'8': Signal-non-understanding
'9': Floor Holder
'10': Sympathy
'11': Dispreferred Answers
'12': Reformulate/Summarize
'13': Exclamation
'14': Interrupted/Abandoned/Uninterpretable
'15': Expansions of y/n Answers
'16': Action-directive
'17': Tag-Question
'18': Accept
'19': Rhetorical-question Continue
'20': Self-talk
'21': Rhetorical-Question
'22': Yes-No-question
'23': Open-Question
'24': Rising Tone
'25': Other Answers
'26': Commit
'27': Wh-Question
'28': Repeat
'29': Follow Me
'30': Thanking
'31': Offer
'32': About-task
'33': Reject-part
'34': Affirmative Non-yes Answers
'35': Apology
'36': Downplayer
'37': Humorous Material
'38': Accept-part
'39': Collaborative Completion
'40': Mimic Other
'41': Understanding Check
'42': Misspeak Self-Correction
'43': Or-Question
'44': Topic Change
'45': Negative Non-no Answers
'46': Floor Grabber
'47': Correct-misspeaking
'48': Maybe
'49': Acknowledge-answer
'50': Defending/Explanation
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 963913
num_examples: 14484
- name: validation
num_bytes: 111813
num_examples: 1630
- name: test
num_bytes: 419797
num_examples: 6459
download_size: 5330724
dataset_size: 1495523
- config_name: gum
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': preparation
'1': evaluation
'2': circumstance
'3': solutionhood
'4': justify
'5': result
'6': evidence
'7': purpose
'8': concession
'9': elaboration
'10': background
'11': condition
'12': cause
'13': restatement
'14': motivation
'15': antithesis
'16': no_relation
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 270401
num_examples: 1700
- name: validation
num_bytes: 35405
num_examples: 259
- name: test
num_bytes: 40334
num_examples: 248
download_size: 5330724
dataset_size: 346140
- config_name: emergent
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': observing
'1': for
'2': against
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 313257
num_examples: 2076
- name: validation
num_bytes: 38948
num_examples: 259
- name: test
num_bytes: 38842
num_examples: 259
download_size: 5330724
dataset_size: 391047
- config_name: persuasiveness-relevance
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 153158
num_examples: 725
- name: validation
num_bytes: 19663
num_examples: 91
- name: test
num_bytes: 18880
num_examples: 90
download_size: 5330724
dataset_size: 191701
- config_name: persuasiveness-specificity
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 106594
num_examples: 504
- name: validation
num_bytes: 13766
num_examples: 62
- name: test
num_bytes: 12712
num_examples: 62
download_size: 5330724
dataset_size: 133072
- config_name: persuasiveness-strength
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 79679
num_examples: 371
- name: validation
num_bytes: 10052
num_examples: 46
- name: test
num_bytes: 10225
num_examples: 46
download_size: 5330724
dataset_size: 99956
- config_name: emobank-dominance
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 660303
num_examples: 6392
- name: validation
num_bytes: 86802
num_examples: 798
- name: test
num_bytes: 83319
num_examples: 798
download_size: 5330724
dataset_size: 830424
- config_name: squinky-implicature
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 471552
num_examples: 3724
- name: validation
num_bytes: 58087
num_examples: 465
- name: test
num_bytes: 56549
num_examples: 465
download_size: 5330724
dataset_size: 586188
- config_name: sarcasm
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': notsarc
'1': sarc
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 2177332
num_examples: 3754
- name: validation
num_bytes: 257834
num_examples: 469
- name: test
num_bytes: 269724
num_examples: 469
download_size: 5330724
dataset_size: 2704890
- config_name: squinky-formality
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 459721
num_examples: 3622
- name: validation
num_bytes: 59921
num_examples: 453
- name: test
num_bytes: 58242
num_examples: 452
download_size: 5330724
dataset_size: 577884
- config_name: stac
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': Comment
'1': Contrast
'2': Q_Elab
'3': Parallel
'4': Explanation
'5': Narration
'6': Continuation
'7': Result
'8': Acknowledgement
'9': Alternation
'10': Question_answer_pair
'11': Correction
'12': Clarification_question
'13': Conditional
'14': Sequence
'15': Elaboration
'16': Background
'17': no_relation
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 645969
num_examples: 11230
- name: validation
num_bytes: 71400
num_examples: 1247
- name: test
num_bytes: 70451
num_examples: 1304
download_size: 5330724
dataset_size: 787820
- config_name: pdtb
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': Synchrony
'1': Contrast
'2': Asynchronous
'3': Conjunction
'4': List
'5': Condition
'6': Pragmatic concession
'7': Restatement
'8': Pragmatic cause
'9': Alternative
'10': Pragmatic condition
'11': Pragmatic contrast
'12': Instantiation
'13': Exception
'14': Cause
'15': Concession
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 2968638
num_examples: 12907
- name: validation
num_bytes: 276997
num_examples: 1204
- name: test
num_bytes: 235851
num_examples: 1085
download_size: 5330724
dataset_size: 3481486
- config_name: persuasiveness-premisetype
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': testimony
'1': warrant
'2': invented_instance
'3': common_knowledge
'4': statistics
'5': analogy
'6': definition
'7': real_example
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 122631
num_examples: 566
- name: validation
num_bytes: 15920
num_examples: 71
- name: test
num_bytes: 14395
num_examples: 70
download_size: 5330724
dataset_size: 152946
- config_name: squinky-informativeness
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 464855
num_examples: 3719
- name: validation
num_bytes: 60447
num_examples: 465
- name: test
num_bytes: 56872
num_examples: 464
download_size: 5330724
dataset_size: 582174
- config_name: persuasiveness-claimtype
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype:
class_label:
names:
'0': Value
'1': Fact
'2': Policy
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 31259
num_examples: 160
- name: validation
num_bytes: 3803
num_examples: 20
- name: test
num_bytes: 3717
num_examples: 19
download_size: 5330724
dataset_size: 38779
- config_name: emobank-valence
features:
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': low
'1': high
- name: idx
dtype: int32
splits:
- name: train
num_bytes: 539652
num_examples: 5150
- name: validation
num_bytes: 62809
num_examples: 644
- name: test
num_bytes: 66178
num_examples: 643
download_size: 5330724
dataset_size: 668639
---
# Dataset Card for pragmeval
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@sileod](https://github.com/sileod) for adding this dataset. |
proto_qa | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- other
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: protoqa
pretty_name: ProtoQA
dataset_info:
- config_name: proto_qa
features:
- name: normalized-question
dtype: string
- name: question
dtype: string
- name: answer-clusters
sequence:
- name: count
dtype: int32
- name: clusterid
dtype: string
- name: answers
sequence: string
- name: answerstrings
sequence: string
- name: totalcount
dtype: int32
- name: id
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 3943484
num_examples: 8782
- name: validation
num_bytes: 472121
num_examples: 980
download_size: 7352932
dataset_size: 4415605
- config_name: proto_qa_cs
features:
- name: normalized-question
dtype: string
- name: question
dtype: string
- name: answers-cleaned
sequence:
- name: count
dtype: int32
- name: clusterid
dtype: string
- name: answers
sequence: string
- name: answerstrings
sequence: string
- name: totalcount
dtype: int32
- name: id
dtype: string
- name: source
dtype: string
splits:
- name: validation
num_bytes: 84466
num_examples: 52
download_size: 115704
dataset_size: 84466
- config_name: proto_qa_cs_assessments
features:
- name: question
dtype: string
- name: assessments
sequence: string
splits:
- name: validation
num_bytes: 12473
num_examples: 52
download_size: 24755
dataset_size: 12473
---
# Dataset Card for ProtoQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Interactive Demo:** [Interactive demo](http://protoqa.com)
- **Repository:** [proto_qa repository](https://github.com/iesl/protoqa-data)
- **Paper:** [proto_qa paper](https://arxiv.org/pdf/2005.00771.pdf)
- **Point of Contact:** [Michael Boratko](mailto:mboratko@cs.umass.edu)
[Xiang Lorraine Li](mailto:xiangl@cs.umass.edu)
[Tim O’Gorman](mailto:togorman@cs.umass.edu)
[Rajarshi Das](mailto:rajarshi@cs.umass.edu)
[Dan Le](mailto:dhle@cs.umass.edu)
[Andrew McCallum](mailto:mccallum@cs.umass.edu)
### Dataset Summary
This dataset is for studying computational models trained to reason about prototypical situations. It is anticipated that it would not be used directly in a downstream task, but rather as a way of studying the knowledge (and biases) of prototypical situations already contained in pre-trained models. The data is partially based on the game show Family Feud.
A sample was built from a larger set of all transcriptions using deterministic filtering. Scraped data was acquired through fan transcriptions at [family feud](https://www.familyfeudinfo.com) and [family feud friends](http://familyfeudfriends.arjdesigns.com/); crowdsourced data was acquired with FigureEight (now Appen)
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The text in the dataset is in English
## Dataset Structure
### Data Instances
**What do the instances that comprise the dataset represent?**<br>
Each instance represents a survey question from the Family Feud game along with its reported answer clusters
**How many instances are there in total?**<br>
9789 instances
**What data does each instance consist of?**<br>
Each instance is a question, a set of answers, and a count associated with each answer.
### Data Fields
**Data Files**<br>
Each line is a json dictionary, in which:<br>
**question** contains the question (in original and a normalized form)<br>
**answerstrings** contains the original answers provided by survey respondents (when available), along with the counts for each string. Because the FamilyFeud data has only cluster names rather than strings, those cluster names are included with 0 weight.<br>
**answer-clusters** list of clusters, with the count of each cluster and the strings included in that cluster. Each cluster is given a unique ID that can be linked to in the assessment files.
The simplified configuration includes:
- `question`: contains the original question
- `normalized-question`: contains the question in normalized form
- `totalcount`: total number of answers collected for the question
- `id`: unique identifier of the question (can be used to look up the entry in the raw dataset)
- `source`: the source of the question (e.g., scraped Family Feud data or crowdsourced)
- `answerstrings`: the original answer strings provided by survey respondents, with a count for each string
- `answer-clusters | answers-cleaned`: list of clusters, each with:
* `clusterid`: Each cluster is given a unique ID that can be linked to in the assessment files
* `count`: the count of each cluster
* `answers`: the strings included in that cluster
In addition to the above, there is a crowdsourced assessments file. The config "proto_qa_cs_assessments" provides mappings from additional human and model answers to clusters, to evaluate different assessment methods.
**Assessment files**<br>
The file **data/dev/crowdsource_dev.assessments.jsonl** contains mappings from additional human and model answers to clusters, to evaluate different assessment methods.
Each line contains:<br>
* `question`: contains the ID of the question
* `assessments`: maps individual strings to one of three options, either the answer cluster id, "invalid" if the answer is judged to be bad, or "valid_new_cluster" if the answer is valid but does not match any existing clusters.
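As a rough sketch of how the clustered answers can be read with the `datasets` library (config name `proto_qa` taken from the metadata above; note that a `sequence` of named fields loads as a dict of parallel lists):
```python
from datasets import load_dataset

# Load the main scraped configuration (config name from the metadata above).
dataset = load_dataset("proto_qa", "proto_qa")
example = dataset["train"][0]
print(example["normalized-question"])

# "answer-clusters" loads as parallel lists: clusterid, count, answers.
clusters = example["answer-clusters"]
for cluster_id, count, answers in zip(
        clusters["clusterid"], clusters["count"], clusters["answers"]):
    print(f"{cluster_id} (count={count}): {answers}")
```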
### Data Splits
* proto_qa `Train`: 8781 instances for training or fine-tuning, scraped from Family Feud fan sites (see paper). Scraped data has answer clusters with sizes, but only a single string per cluster (corresponding to the original cluster name).
* proto_qa `Validation`: 979 instances sampled from the same Family Feud data, for use in model validation and development.
* proto_qa_cs `Validation`: 51 questions collected with exhaustive answer collection and manual clustering, matching the details of the eval test set (roughly 100 human answers per question)
**data/dev/crowdsource_dev.assessments.jsonl**: assessment file (format described above) for study of assessment methods.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
**How was the data associated with each instance acquired?**<br>
Scraped data was acquired through fan transcriptions at https://www.familyfeudinfo.com and http://familyfeudfriends.arjdesigns.com/ ; crowdsourced data was acquired with FigureEight (now Appen)
**If the dataset is a sample from a larger set, what was the sampling strategy?**<br>
Deterministic filtering was used (noted elsewhere), but no probabilistic sampling was used.
**Who was involved in the data collection process (e.g., students,crowdworkers , contractors) and how were they compensated?**<br>
Crowdworkers were used for the evaluation dataset. Time per task was calculated and the per-task cost was set to attempt to provide a living wage
**Over what timeframe was the data collected?**<br>
Crowdsource answers were collected between Fall of 2018 and Spring of 2019. Scraped data covers question-answer pairs collected since the origin of the show in 1976
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
**Was any preprocessing/cleaning/labeling of the data done?**<br>
Obvious typos in the crowdsourced answer set were corrected
#### Who are the annotators?
The original question-answer pairs were generated by surveys of US English-speakers in a period from 1976 to present day. Crowd-sourced evaluation was constrained geographically to US English speakers but not otherwise constrained. Additional demographic data was not collected.
### Personal and Sensitive Information
**Does the dataset contain data that might be considered sensitive in any way?**<br>
As the questions address prototypical/stereotypical activities, models trained on more offensive material (such as large language models) may provide offensive answers to such questions. While we found a few questions which we worried would actually encourage models to provide offensive answers, we cannot guarantee that the data is clean of such questions. Even a perfectly innocent version of this dataset would encourage models to express generalizations about situations, and therefore may provoke offensive material that is contained in language models
**Does the dataset contain data that might be considered confidential?**<br>
The data does not concern individuals and thus does not contain any information to identify persons. Crowdsourced answers do not provide any user identifiers.
## Considerations for Using the Data
### Social Impact of Dataset
**Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?**<br>
Not egregiously so (questions are all designed to be shown on television, or replications thereof).
### Discussion of Biases
**Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?**
<br>All original questions were written with US television audiences in mind, and therefore characterize prototypical situations with a specific lens. Any usages which deploy this to actually model prototypical situations globally will carry that bias.
**Are there tasks for which the dataset should not be used?**
<br>We caution regarding free-form use of this dataset for interactive "commonsense question answering" purposes without more study of the biases and stereotypes learned by such models.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The listed authors are maintaining/supporting the dataset. They pledge to help support issues, but cannot guarantee long-term support
### Licensing Information
The Proto_qa dataset is licensed under the [Creative Commons Attribution 4.0 International](https://github.com/iesl/protoqa-data/blob/master/LICENSE)
### Citation Information
```
@InProceedings{
huggingface:dataset,
title = {ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning},
authors = {Michael Boratko, Xiang Lorraine Li, Tim O’Gorman, Rajarshi Das, Dan Le, Andrew McCallum},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {https://github.com/iesl/protoqa-data},
}
```
### Contributions
Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset. |
psc | ---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
pretty_name: psc
dataset_info:
features:
- name: extract_text
dtype: string
- name: summary_text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 5026582
num_examples: 4302
- name: test
num_bytes: 1292103
num_examples: 1078
download_size: 2357808
dataset_size: 6318685
---
# Dataset Card for psc
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
http://zil.ipipan.waw.pl/PolishSummariesCorpus
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Polish Summaries Corpus contains news articles and their summaries. We used summaries of the same article as positive pairs and sampled the most similar summaries of different articles as negatives.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Polish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- extract_text: text to summarise
- summary_text: summary of the extracted text
- label: 1 indicates that the summary is similar (a matching pair), 0 that it is not
### Data Splits
The data is split into train and test sets. The test set does not have a label column, so -1 is set instead.
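A short sketch of loading the corpus with the `datasets` library (the dataset name `psc` is assumed from this card), separating real labels from the placeholder test labels:
```python
from datasets import load_dataset

# Load the Polish Summaries Corpus; the name "psc" is assumed.
dataset = load_dataset("psc")

# Train examples carry a real 0/1 label.
print(dataset["train"][0]["label"])

# Test labels are the placeholder -1, so drop them before supervised evaluation.
labeled_test = dataset["test"].filter(lambda ex: ex["label"] != -1)
print(len(labeled_test), "labeled test pairs")
```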
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
CC BY-SA 3.0
### Citation Information
@inproceedings{ogro:kop:14:lrec,
title={The {P}olish {S}ummaries {C}orpus},
author={Ogrodniczuk, Maciej and Kope{\'c}, Mateusz},
booktitle = "Proceedings of the Ninth International {C}onference on {L}anguage {R}esources and {E}valuation, {LREC}~2014",
year = "2014",
}
### Contributions
Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset. |
ptb_text_only | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- other
license_details: LDC User Agreement for Non-Members
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: Penn Treebank
dataset_info:
features:
- name: sentence
dtype: string
config_name: penn_treebank
splits:
- name: train
num_bytes: 5143706
num_examples: 42068
- name: test
num_bytes: 453710
num_examples: 3761
- name: validation
num_bytes: 403156
num_examples: 3370
download_size: 5951345
dataset_size: 6000572
---
# Dataset Card for Penn Treebank
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://catalog.ldc.upenn.edu/LDC99T42
- **Repository:** https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.train.txt,
https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.valid.txt,
https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.test.txt
- **Paper:** https://www.aclweb.org/anthology/J93-2004.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material.
The rare words in this version are already replaced with the `<unk>` token. The numbers are replaced with the `<N>` token.
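A quick sketch of loading the corpus and checking the placeholder tokens (dataset and config names taken from the metadata above):
```python
from datasets import load_dataset

dataset = load_dataset("ptb_text_only", "penn_treebank")

# Each example is a single pre-tokenized sentence string.
print(dataset["train"][0]["sentence"])

# Count occurrences of the rare-word placeholder in the training split.
unk_total = sum(ex["sentence"].count("<unk>") for ex in dataset["train"])
print("occurrences of <unk>:", unk_total)
```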
### Supported Tasks and Leaderboards
Language Modelling
### Languages
The text in the dataset is in American English
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
Dataset provided for research purposes only. Please check dataset license for additional information.
### Citation Information
@article{marcus-etal-1993-building,
title = "Building a Large Annotated Corpus of {E}nglish: The {P}enn {T}reebank",
author = "Marcus, Mitchell P. and
Santorini, Beatrice and
Marcinkiewicz, Mary Ann",
journal = "Computational Linguistics",
volume = "19",
number = "2",
year = "1993",
url = "https://www.aclweb.org/anthology/J93-2004",
pages = "313--330",
}
### Contributions
Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset. |
pubmed | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids:
- language-modeling
- masked-language-modeling
- text-scoring
- topic-classification
paperswithcode_id: pubmed
pretty_name: PubMed
tags:
- citation-estimation
dataset_info:
- config_name: '2023'
features:
- name: MedlineCitation
struct:
- name: PMID
dtype: int32
- name: DateCompleted
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: NumberOfReferences
dtype: int32
- name: DateRevised
struct:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: Article
struct:
- name: Abstract
struct:
- name: AbstractText
dtype: string
- name: ArticleTitle
dtype: string
- name: AuthorList
struct:
- name: Author
sequence:
- name: LastName
dtype: string
- name: ForeName
dtype: string
- name: Initials
dtype: string
- name: CollectiveName
dtype: string
- name: Language
dtype: string
- name: GrantList
struct:
- name: Grant
sequence:
- name: GrantID
dtype: string
- name: Agency
dtype: string
- name: Country
dtype: string
- name: PublicationTypeList
struct:
- name: PublicationType
sequence: string
- name: MedlineJournalInfo
struct:
- name: Country
dtype: string
- name: ChemicalList
struct:
- name: Chemical
sequence:
- name: RegistryNumber
dtype: string
- name: NameOfSubstance
dtype: string
- name: CitationSubset
dtype: string
- name: MeshHeadingList
struct:
- name: MeshHeading
sequence:
- name: DescriptorName
dtype: string
- name: QualifierName
dtype: string
- name: PubmedData
struct:
- name: ArticleIdList
sequence:
- name: ArticleId
sequence: string
- name: PublicationStatus
dtype: string
- name: History
struct:
- name: PubMedPubDate
sequence:
- name: Year
dtype: int32
- name: Month
dtype: int32
- name: Day
dtype: int32
- name: ReferenceList
sequence:
- name: Citation
dtype: string
- name: CitationId
dtype: int32
splits:
- name: train
num_bytes: 52199025303
num_examples: 34960700
download_size: 41168762331
dataset_size: 52199025303
---
# Dataset Card for PubMed
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nlm.nih.gov/databases/download/pubmed_medline.html
- **Documentation:** https://www.nlm.nih.gov/databases/download/pubmed_medline_documentation.html
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- English
## Dataset Structure
Bear in mind that the data comes from XML files with various tags that are hard to reflect
in a concise JSON format. Tags and lists are somewhat unnatural in XML documents,
which led this library to make some choices regarding the data. "Journal" info was dropped
altogether, as it would have led to many fields being empty all the time.
The hierarchy is also a bit unnatural, but the choice was made to stay as close as
possible to the original data for future releases that may change the schema on NLM's side.
"Author" has been kept and contains either "ForeName", "LastName", "Initials", or "CollectiveName"
(all of these fields will always be present, but only some will be filled).
### Data Instances
```json
{
    "MedlineCitation": {
        "PMID": 0,
        "DateCompleted": {"Year": 0, "Month": 0, "Day": 0},
        "NumberOfReferences": 0,
        "DateRevised": {"Year": 0, "Month": 0, "Day": 0},
        "Article": {
            "Abstract": {"AbstractText": "Some abstract (can be missing)"},
            "ArticleTitle": "Article title",
            "AuthorList": {"Author": [
                {"LastName": "Doe", "ForeName": "John", "Initials": "JD", "CollectiveName": ""},
                {"CollectiveName": "The Manhattan Project", "LastName": "", "ForeName": "", "Initials": ""}
            ]},
            "Language": "en",
            "GrantList": {
                "Grant": []
            },
            "PublicationTypeList": {"PublicationType": []}
        },
        "MedlineJournalInfo": {"Country": "France"},
        "ChemicalList": {"Chemical": [{
            "RegistryNumber": "XX",
            "NameOfSubstance": "Methanol"
        }]},
        "CitationSubset": "AIM",
        "MeshHeadingList": {
            "MeshHeading": []
        }
    },
    "PubmedData": {
        "ArticleIdList": {"ArticleId": "10.1002/bjs.1800650203"},
        "PublicationStatus": "ppublish",
        "History": {"PubMedPubDate": [{"Year": 0, "Month": 0, "Day": 0}]},
        "ReferenceList": [{"Citation": "Somejournal", "CitationId": 1}]
    }
}
```
### Data Fields
The main fields that will probably interest people are:
- "MedlineCitation" > "Article" > "AuthorList" > "Author"
- "MedlineCitation" > "Article" > "Abstract" > "AbstractText"
- "MedlineCitation" > "Article" > "ArticleTitle"
- "MedlineCitation" > "ChemicalList" > "Chemical"
- "MedlineCitation" > "NumberOfReferences"
### Data Splits
There are no splits in this dataset. It is given as is.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
https://www.nlm.nih.gov/databases/download/terms_and_conditions.html
### Citation Information
[Courtesy of the U.S. National Library of Medicine](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html).
### Contributions
Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
|
pubmed_qa | ---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- expert-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
paperswithcode_id: pubmedqa
pretty_name: PubMedQA
configs:
- pqa_artificial
- pqa_labeled
- pqa_unlabeled
dataset_info:
- config_name: pqa_labeled
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: reasoning_required_pred
dtype: string
- name: reasoning_free_pred
dtype: string
- name: long_answer
dtype: string
- name: final_decision
dtype: string
splits:
- name: train
num_bytes: 2089200
num_examples: 1000
download_size: 687882700
dataset_size: 2089200
- config_name: pqa_unlabeled
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: long_answer
dtype: string
splits:
- name: train
num_bytes: 125938502
num_examples: 61249
download_size: 687882700
dataset_size: 125938502
- config_name: pqa_artificial
features:
- name: pubid
dtype: int32
- name: question
dtype: string
- name: context
sequence:
- name: contexts
dtype: string
- name: labels
dtype: string
- name: meshes
dtype: string
- name: long_answer
dtype: string
- name: final_decision
dtype: string
splits:
- name: train
num_bytes: 443554667
num_examples: 211269
download_size: 687882700
dataset_size: 443554667
---
# Dataset Card for PubMedQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [PUBMED_QA homepage](https://pubmedqa.github.io/)
- **Repository:** [PUBMED_QA repository](https://github.com/pubmedqa/pubmedqa)
- **Paper:** [PUBMED_QA: A Dataset for Biomedical Research Question Answering](https://arxiv.org/abs/1909.06146)
- **Leaderboard:** [PUBMED_QA: Leaderboard](https://pubmedqa.github.io/)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@tuner007](https://github.com/tuner007) for adding this dataset. |
py_ast | ---
pretty_name: PyAst
annotations_creators:
- machine-generated
language_creators:
- found
language:
- code
license:
- bsd-2-clause
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text2text-generation
- text-generation
- fill-mask
task_ids: []
paperswithcode_id: null
tags:
- code-modeling
- code-generation
dataset_info:
features:
- name: ast
sequence:
- name: type
dtype: string
- name: value
dtype: string
- name: children
sequence: int32
config_name: ast
splits:
- name: train
num_bytes: 1870790180
num_examples: 100000
- name: test
num_bytes: 907514993
num_examples: 50000
download_size: 526642289
dataset_size: 2778305173
---
# Dataset Card for py_ast
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [py150](https://www.sri.inf.ethz.ch/py150)
- **Paper**: [Probabilistic Model for Code with Decision Trees](https://www.semanticscholar.org/paper/Probabilistic-model-for-code-with-decision-trees-Raychev-Bielik/62e176977d439aac2e2d7eca834a7a99016dfcaf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset consists of parsed ASTs that were used to train and evaluate the DeepSyn tool.
The Python programs were collected from GitHub repositories
by removing duplicate files, removing project forks (copies of another existing repository),
keeping only programs that parse and have at most 30,000 nodes in the AST,
and aiming to remove obfuscated files.
### Supported Tasks and Leaderboards
Code Representation, Unsupervised Learning
### Languages
Python
## Dataset Structure
### Data Instances
A typical datapoint contains the parsed AST of a Python program.
The main key is `ast`, under which every program's AST is stored.
Each node has:
`type`, which gives the type of the node;
`children`, which lists the indices of the node's children (a non-empty list, if the node has any);
`value`, the hardcoded value of the node, if it has one (else "N/A").
An example would be:
```
[ {"type":"Module","children":[1,4]},{"type":"Assign","children":[2,3]},{"type":"NameStore","value":"x"},{"type":"Num","value":"7"}, {"type":"Print","children":[5]}, {"type":"BinOpAdd","children":[6,7]}, {"type":"NameLoad","value":"x"}, {"type":"Num","value":"1"} ]
```
### Data Fields
- `ast`: a list of dictionaries, wherein every dictionary is a node in the Abstract Syntax Tree.
- `type`: the type of the node.
- `children`: list of indices of the nodes which are children of the given node.
- `value`: the hardcoded value, if the node holds one.
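Because each AST is stored as a flat list whose `children` entries are indices back into that list, recovering the tree structure is a small traversal. A minimal sketch (dataset and config names assumed from the metadata above; a `sequence` of named fields loads as a dict of parallel lists):
```python
from datasets import load_dataset

dataset = load_dataset("py_ast", "ast")
# "ast" loads as a dict of parallel lists: "type", "value", "children".
tree = dataset["train"][0]["ast"]

def walk(index: int = 0, depth: int = 0) -> None:
    """Depth-first print of the flat, index-linked AST from the root node."""
    node_type = tree["type"][index]
    value = tree["value"][index]
    suffix = f" = {value}" if value and value != "N/A" else ""
    print("  " * depth + node_type + suffix)
    for child in tree["children"][index]:
        walk(child, depth + 1)

walk()
```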
### Data Splits
The data is split into a training and test set.
The final split sizes are as follows:
| | train | test |
|------------------|--------:|--------:|
| py_ast examples | 100000 | 50000 |
## Dataset Creation
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Raychev, V., Bielik, P., and Vechev, M.
### Licensing Information
MIT, BSD and Apache
### Citation Information
```
@inproceedings{10.1145/2983990.2984041,
author = {Raychev, Veselin and Bielik, Pavol and Vechev, Martin},
title = {Probabilistic Model for Code with Decision Trees},
year = {2016},
isbn = {9781450344449},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/2983990.2984041},
doi = {10.1145/2983990.2984041},
booktitle = {Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications},
pages = {731–747},
numpages = {17},
keywords = {Code Completion, Decision Trees, Probabilistic Models of Code},
location = {Amsterdam, Netherlands},
series = {OOPSLA 2016}
}
```
### Contributions
Thanks to [@reshinthadithyan](https://github.com/reshinthadithyan) for adding this dataset. |
qa4mre | ---
annotations_creators:
- other
language:
- ar
- bg
- de
- en
- es
- it
- ro
language_creators:
- found
license:
- unknown
multilinguality:
- multilingual
pretty_name: 'QA4MRE: Question Answering for Machine Reading Evaluation'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: null
dataset_info:
- config_name: 2011.main.DE
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1747118
num_examples: 120
download_size: 222289
dataset_size: 1747118
- config_name: 2011.main.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1569676
num_examples: 120
download_size: 202490
dataset_size: 1569676
- config_name: 2011.main.ES
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1694460
num_examples: 120
download_size: 217617
dataset_size: 1694460
- config_name: 2011.main.IT
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1667188
num_examples: 120
download_size: 214764
dataset_size: 1667188
- config_name: 2011.main.RO
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1740419
num_examples: 120
download_size: 221510
dataset_size: 1740419
- config_name: 2012.main.AR
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2710656
num_examples: 160
download_size: 356178
dataset_size: 2710656
- config_name: 2012.main.BG
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 3454215
num_examples: 160
download_size: 445060
dataset_size: 3454215
- config_name: 2012.main.DE
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2087466
num_examples: 160
download_size: 281600
dataset_size: 2087466
- config_name: 2012.main.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1757586
num_examples: 160
download_size: 243467
dataset_size: 1757586
- config_name: 2012.main.ES
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2057402
num_examples: 160
download_size: 278445
dataset_size: 2057402
- config_name: 2012.main.IT
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2071710
num_examples: 160
download_size: 280051
dataset_size: 2071710
- config_name: 2012.main.RO
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2074930
num_examples: 160
download_size: 279541
dataset_size: 2074930
- config_name: 2012.alzheimers.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 1637988
num_examples: 40
download_size: 177345
dataset_size: 1637988
- config_name: 2013.main.AR
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 4180979
num_examples: 284
download_size: 378302
dataset_size: 4180979
- config_name: 2013.main.BG
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 5403246
num_examples: 284
download_size: 463605
dataset_size: 5403246
- config_name: 2013.main.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2887866
num_examples: 284
download_size: 274969
dataset_size: 2887866
- config_name: 2013.main.ES
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 3449693
num_examples: 284
download_size: 315166
dataset_size: 3449693
- config_name: 2013.main.RO
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 3363049
num_examples: 284
download_size: 313510
dataset_size: 3363049
- config_name: 2013.alzheimers.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 2614812
num_examples: 40
download_size: 274413
dataset_size: 2614812
- config_name: 2013.entrance_exam.EN
features:
- name: topic_id
dtype: string
- name: topic_name
dtype: string
- name: test_id
dtype: string
- name: document_id
dtype: string
- name: document_str
dtype: string
- name: question_id
dtype: string
- name: question_str
dtype: string
- name: answer_options
sequence:
- name: answer_id
dtype: string
- name: answer_str
dtype: string
- name: correct_answer_id
dtype: string
- name: correct_answer_str
dtype: string
splits:
- name: train
num_bytes: 180827
num_examples: 46
download_size: 54598
dataset_size: 180827
---
# Dataset Card for "qa4mre"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://nlp.uned.es/clef-qa/repository/qa4mre.php
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation](https://link.springer.com/chapter/10.1007/978-3-642-40802-1_29)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 5.49 MB
- **Size of the generated dataset:** 48.35 MB
- **Total amount of disk used:** 53.84 MB
### Dataset Summary
The QA4MRE dataset was created for the CLEF 2011/2012/2013 shared tasks to promote research in
question answering and reading comprehension. The dataset contains a supporting
passage and a set of questions corresponding to the passage. Multiple answer options
are provided for each question, of which only one is correct. The
training and test datasets are available for the main track.
Additional gold standard documents are available for two pilot studies: one on
Alzheimer's disease data, and the other on entrance exams data.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### 2011.main.DE
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 1.75 MB
- **Total amount of disk used:** 1.97 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.EN
- **Size of downloaded dataset files:** 0.20 MB
- **Size of the generated dataset:** 1.57 MB
- **Total amount of disk used:** 1.77 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.ES
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 1.70 MB
- **Total amount of disk used:** 1.91 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.IT
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 1.67 MB
- **Total amount of disk used:** 1.88 MB
An example of 'train' looks as follows.
```
```
#### 2011.main.RO
- **Size of downloaded dataset files:** 0.22 MB
- **Size of the generated dataset:** 1.74 MB
- **Total amount of disk used:** 1.96 MB
An example of 'train' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### 2011.main.DE
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.EN
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.ES
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.IT
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
#### 2011.main.RO
- `topic_id`: a `string` feature.
- `topic_name`: a `string` feature.
- `test_id`: a `string` feature.
- `document_id`: a `string` feature.
- `document_str`: a `string` feature.
- `question_id`: a `string` feature.
- `question_str`: a `string` feature.
- `answer_options`: a dictionary feature containing:
- `answer_id`: a `string` feature.
- `answer_str`: a `string` feature.
- `correct_answer_id`: a `string` feature.
- `correct_answer_str`: a `string` feature.
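Since `answer_options` is declared as a sequence feature, a loaded example exposes it as a dictionary of parallel lists (`answer_id` and `answer_str`) rather than a list of dictionaries. A minimal sketch, assuming the dataset loads under the id `qa4mre` with one of the config names listed above:
```python
from datasets import load_dataset

ds = load_dataset("qa4mre", "2011.main.EN", split="train")
example = ds[0]

# `answer_options` is a dict of parallel lists, one entry per option.
options = example["answer_options"]
for answer_id, answer_str in zip(options["answer_id"], options["answer_str"]):
    marker = "*" if answer_id == example["correct_answer_id"] else " "
    print(f"{marker} ({answer_id}) {answer_str}")
```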
### Data Splits
| name |train|
|------------|----:|
|2011.main.DE| 120|
|2011.main.EN| 120|
|2011.main.ES| 120|
|2011.main.IT| 120|
|2011.main.RO| 120|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{10.1007/978-3-642-40802-1_29,
author="Pe{\~{n}}as, Anselmo
and Hovy, Eduard
and Forner, Pamela
and Rodrigo, {\'A}lvaro
and Sutcliffe, Richard
and Morante, Roser",
editor="Forner, Pamela
and M{\"u}ller, Henning
and Paredes, Roberto
and Rosso, Paolo
and Stein, Benno",
title="QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation",
booktitle="Information Access Evaluation. Multilinguality, Multimodality, and Visualization",
year="2013",
publisher="Springer Berlin Heidelberg",
address="Berlin, Heidelberg",
pages="303--320",
isbn="978-3-642-40802-1"
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
qa_srl | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: qa-srl
pretty_name: QA-SRL
dataset_info:
features:
- name: sentence
dtype: string
- name: sent_id
dtype: string
- name: predicate_idx
dtype: int32
- name: predicate
dtype: string
- name: question
sequence: string
- name: answers
sequence: string
config_name: plain_text
splits:
- name: train
num_bytes: 1835549
num_examples: 6414
- name: validation
num_bytes: 632992
num_examples: 2183
- name: test
num_bytes: 637317
num_examples: 2201
download_size: 1087729
dataset_size: 3105858
---
# Dataset Card for QA-SRL
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Homepage](https://dada.cs.washington.edu/qasrl/#page-top)
- **Annotation Tool:** [Annotation tool](https://github.com/luheng/qasrl_annotation)
- **Repository:** [Repository](https://dada.cs.washington.edu/qasrl/#dataset)
- **Paper:** [Qa_srl paper](https://www.aclweb.org/anthology/D15-1076.pdf)
- **Point of Contact:** [Luheng He](luheng@cs.washington.edu)
### Dataset Summary
We model the predicate-argument structure of a sentence with a set of question-answer pairs. Our method allows practical large-scale annotation of training data. We focus on semantic rather than syntactic annotation, and introduce a scalable method for gathering data that allows both training and evaluation.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
We use question-answer pairs to model verbal predicate-argument structure. The questions start with wh-words (Who, What, Where, When, etc.) and contain a verb predicate from the sentence; the answers are phrases in the sentence. For example:
`UCD finished the 2006 championship as Dublin champions , by beating St Vincents in the final .`
| Predicate | Question | Answer |
|---|---|---|
| finished | Who finished something? | UCD |
| finished | What did someone finish? | the 2006 championship |
| finished | What did someone finish something as? | Dublin champions |
| finished | How did someone finish something? | by beating St Vincents in the final |
| beating | Who beat someone? | UCD |
| beating | When did someone beat someone? | in the final |
| beating | Who did someone beat? | St Vincents |
### Data Fields
Annotations provided are as follows:
- `sentence`: the tokenized sentence
- `sent_id`: the sentence identifier
- `predicate_idx`: the index of the predicate (its position in the sentence)
- `predicate`: the predicate token
- `question`: the question as a list of tokens. The question always consists of seven slots, as defined in the paper. Empty slots are represented with the marker "_". The question ends with a question mark.
- `answers`: the list of answers to the question
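As a small illustration of the seven-slot question format, the slots can be joined back into a readable question by dropping the "_" placeholders. The slot values below are hypothetical, chosen to match the example table above:
```python
# Hypothetical seven-slot question with "_" marking empty slots; the final
# token is the question mark.
question_slots = ["Who", "_", "_", "finished", "something", "_", "_", "?"]

words = [slot for slot in question_slots if slot != "_"]
question_text = " ".join(words[:-1]) + words[-1]  # attach the "?" directly
print(question_text)  # -> "Who finished something?"
```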
### Data Splits
| Dataset | Sentences | Verbs | QAs |
|---|---:|---:|---:|
| **newswire-train** | 744 | 2020 | 4904 |
| **newswire-dev** | 249 | 664 | 1606 |
| **newswire-test** | 248 | 652 | 1599 |
| **Wikipedia-train** | 1174 | 2647 | 6414 |
| **Wikipedia-dev** | 392 | 895 | 2183 |
| **Wikipedia-test** | 393 | 898 | 2201 |
**Please note:**
This dataset contains only the Wikipedia portion. Reconstructing the newswire portion requires the CoNLL-2009 English training data, which is distributed under a restrictive license; the newswire data is therefore not included here.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
We annotated over 3,000 sentences (nearly 8,000 verbs) in total across two domains: newswire (PropBank) and Wikipedia.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
Non-expert annotators were given a short tutorial and a small set of sample annotations (about 10 sentences). Annotators were hired if they showed a good understanding of English and the task. The entire screening process usually took less than 2 hours.
#### Who are the annotators?
10 part-time, non-expert annotators from Upwork (previously oDesk).
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Luheng He](luheng@cs.washington.edu)
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{he-etal-2015-qasrl,
  title = {QA-SRL: Question-Answer Driven Semantic Role Labeling},
  author = {He, Luheng and Lewis, Mike and Zettlemoyer, Luke},
  year = {2015},
  publisher = {cs.washington.edu},
  howpublished = {\url{https://dada.cs.washington.edu/qasrl/#page-top}},
}
```
### Contributions
Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset. |
qa_zre | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: QaZre
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: null
tags:
- zero-shot-relation-extraction
dataset_info:
features:
- name: relation
dtype: string
- name: question
dtype: string
- name: subject
dtype: string
- name: context
dtype: string
- name: answers
sequence: string
splits:
- name: test
num_bytes: 29410194
num_examples: 120000
- name: validation
num_bytes: 1481430
num_examples: 6000
- name: train
num_bytes: 2054954011
num_examples: 8400000
download_size: 516061636
dataset_size: 2085845635
---
# Dataset Card for QaZre
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://nlp.cs.washington.edu/zeroshot](http://nlp.cs.washington.edu/zeroshot)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 516.06 MB
- **Size of the generated dataset:** 2.09 GB
- **Total amount of disk used:** 2.60 GB
### Dataset Summary
A dataset reducing relation extraction to simple reading comprehension questions.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 516.06 MB
- **Size of the generated dataset:** 2.09 GB
- **Total amount of disk used:** 2.60 GB
An example of 'validation' looks as follows.
```
{
"answers": [],
"context": "answer",
"question": "What is XXX in this question?",
"relation": "relation_name",
"subject": "Some entity Here is a bit of context which will explain the question in some way"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `relation`: a `string` feature.
- `question`: a `string` feature.
- `subject`: a `string` feature.
- `context`: a `string` feature.
- `answers`: a `list` of `string` features.
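The question templates use "XXX" as a placeholder for the subject (as in the example above), and an empty `answers` list (also as above) marks a case where no answer is found in the context. A minimal sketch, assuming the dataset loads under the id `qa_zre`:
```python
from datasets import load_dataset

ds = load_dataset("qa_zre", split="validation")
example = ds[0]

# Instantiate the question template with the actual subject.
filled_question = example["question"].replace("XXX", example["subject"])
print(filled_question)
print(example["answers"] or "<no answer found in context>")
```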
### Data Splits
| name | train | validation | test |
|---------|--------:|-----------:|-------:|
| default | 8400000 | 6000 | 120000 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Unknown.
### Citation Information
```
@inproceedings{levy-etal-2017-zero,
title = "Zero-Shot Relation Extraction via Reading Comprehension",
author = "Levy, Omer and
Seo, Minjoon and
Choi, Eunsol and
Zettlemoyer, Luke",
booktitle = "Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017)",
month = aug,
year = "2017",
address = "Vancouver, Canada",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/K17-1034",
doi = "10.18653/v1/K17-1034",
pages = "333--342",
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@ghomasHudson](https://github.com/ghomasHudson), [@lewtun](https://github.com/lewtun) for adding this dataset. |
qangaroo | ---
language:
- en
paperswithcode_id: null
pretty_name: qangaroo
dataset_info:
- config_name: medhop
features:
- name: query
dtype: string
- name: supports
sequence: string
- name: candidates
sequence: string
- name: answer
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 93947725
num_examples: 1620
- name: validation
num_bytes: 16463555
num_examples: 342
download_size: 339843061
dataset_size: 110411280
- config_name: masked_medhop
features:
- name: query
dtype: string
- name: supports
sequence: string
- name: candidates
sequence: string
- name: answer
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 95823986
num_examples: 1620
- name: validation
num_bytes: 16802484
num_examples: 342
download_size: 339843061
dataset_size: 112626470
- config_name: wikihop
features:
- name: query
dtype: string
- name: supports
sequence: string
- name: candidates
sequence: string
- name: answer
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 325994029
num_examples: 43738
- name: validation
num_bytes: 40869634
num_examples: 5129
download_size: 339843061
dataset_size: 366863663
- config_name: masked_wikihop
features:
- name: query
dtype: string
- name: supports
sequence: string
- name: candidates
sequence: string
- name: answer
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 348290479
num_examples: 43738
- name: validation
num_bytes: 43689810
num_examples: 5129
download_size: 339843061
dataset_size: 391980289
---
# Dataset Card for "qangaroo"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://qangaroo.cs.ucl.ac.uk/index.html](http://qangaroo.cs.ucl.ac.uk/index.html)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.36 GB
- **Size of the generated dataset:** 981.89 MB
- **Total amount of disk used:** 2.34 GB
### Dataset Summary
We have created two new Reading Comprehension datasets focusing on multi-hop (also known as multi-step) inference.
Several pieces of information often jointly imply another fact. In multi-hop inference, a new fact is derived by combining facts via a chain of multiple steps.
Our aim is to build Reading Comprehension methods that perform multi-hop inference on text, where individual facts are spread out across different documents.
The two QAngaroo datasets provide a training and evaluation resource for such methods.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### masked_medhop
- **Size of downloaded dataset files:** 339.84 MB
- **Size of the generated dataset:** 112.63 MB
- **Total amount of disk used:** 452.47 MB
An example of 'validation' looks as follows.
```
```
#### masked_wikihop
- **Size of downloaded dataset files:** 339.84 MB
- **Size of the generated dataset:** 391.98 MB
- **Total amount of disk used:** 731.82 MB
An example of 'validation' looks as follows.
```
```
#### medhop
- **Size of downloaded dataset files:** 339.84 MB
- **Size of the generated dataset:** 110.42 MB
- **Total amount of disk used:** 450.26 MB
An example of 'validation' looks as follows.
```
```
#### wikihop
- **Size of downloaded dataset files:** 339.84 MB
- **Size of the generated dataset:** 366.87 MB
- **Total amount of disk used:** 706.71 MB
An example of 'validation' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### masked_medhop
- `query`: a `string` feature.
- `supports`: a `list` of `string` features.
- `candidates`: a `list` of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
#### masked_wikihop
- `query`: a `string` feature.
- `supports`: a `list` of `string` features.
- `candidates`: a `list` of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
#### medhop
- `query`: a `string` feature.
- `supports`: a `list` of `string` features.
- `candidates`: a `list` of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
#### wikihop
- `query`: a `string` feature.
- `supports`: a `list` of `string` features.
- `candidates`: a `list` of `string` features.
- `answer`: a `string` feature.
- `id`: a `string` feature.
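As a quick illustration of the fields (not the method from the paper), here is a naive baseline that guesses the candidate mentioned most often across the support documents, assuming the dataset loads under the id `qangaroo` with the config names above:
```python
from datasets import load_dataset

ds = load_dataset("qangaroo", "wikihop", split="validation")
example = ds[0]

# Count how often each candidate string occurs in the support documents.
supports_text = " ".join(example["supports"]).lower()
guess = max(example["candidates"], key=lambda c: supports_text.count(c.lower()))
print("guess:", guess, "| gold:", example["answer"])
```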
### Data Splits
| name |train|validation|
|--------------|----:|---------:|
|masked_medhop | 1620| 342|
|masked_wikihop|43738| 5129|
|medhop | 1620| 342|
|wikihop |43738| 5129|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
qanta | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: Quizbowl
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: quizbowl
tags:
- quizbowl
dataset_info:
features:
- name: id
dtype: string
- name: qanta_id
dtype: int32
- name: proto_id
dtype: string
- name: qdb_id
dtype: int32
- name: dataset
dtype: string
- name: text
dtype: string
- name: full_question
dtype: string
- name: first_sentence
dtype: string
- name: char_idx
dtype: int32
- name: sentence_idx
dtype: int32
- name: tokenizations
sequence:
sequence: int32
length: 2
- name: answer
dtype: string
- name: page
dtype: string
- name: raw_answer
dtype: string
- name: fold
dtype: string
- name: gameplay
dtype: bool
- name: category
dtype: string
- name: subcategory
dtype: string
- name: tournament
dtype: string
- name: difficulty
dtype: string
- name: year
dtype: int32
config_name: mode=first,char_skip=25
splits:
- name: adversarial
num_bytes: 1258844
num_examples: 1145
- name: buzzdev
num_bytes: 1553636
num_examples: 1161
- name: buzztest
num_bytes: 2653425
num_examples: 1953
- name: buzztrain
num_bytes: 19699736
num_examples: 16706
- name: guessdev
num_bytes: 1414882
num_examples: 1055
- name: guesstest
num_bytes: 2997123
num_examples: 2151
- name: guesstrain
num_bytes: 117599750
num_examples: 96221
download_size: 170754918
dataset_size: 147177396
---
# Dataset Card for "qanta"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.qanta.org/](http://www.qanta.org/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792)
- **Point of Contact:** [Jordan Boyd-Graber](mailto:jbg@umiacs.umd.edu)
- **Size of downloaded dataset files:** 170.75 MB
- **Size of the generated dataset:** 147.18 MB
- **Total amount of disk used:** 317.93 MB
### Dataset Summary
The Qanta dataset is a question answering dataset based on the academic trivia game Quizbowl.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### mode=first,char_skip=25
- **Size of downloaded dataset files:** 170.75 MB
- **Size of the generated dataset:** 147.18 MB
- **Total amount of disk used:** 317.93 MB
An example of 'guessdev' looks as follows.
```
This example was too long and was cropped:
{
"answer": "Apollo_program",
"category": "History",
"char_idx": -1,
"dataset": "quizdb.org",
"difficulty": "easy_college",
"first_sentence": "As part of this program, William Anders took a photo that Galen Rowell called \"the most influential environmental photograph ever taken.\"",
"fold": "guessdev",
"full_question": "\"As part of this program, William Anders took a photo that Galen Rowell called \\\"the most influential environmental photograph e...",
"gameplay": false,
"id": "127028-first",
"page": "Apollo_program",
"proto_id": "",
"qanta_id": 127028,
"qdb_id": 126689,
"raw_answer": "Apollo program [or Project Apollo; accept Apollo 8; accept Apollo 1; accept Apollo 11; prompt on landing on the moon]",
"sentence_idx": -1,
"subcategory": "American",
"text": "As part of this program, William Anders took a photo that Galen Rowell called \"the most influential environmental photograph ever taken.\"",
"tokenizations": [[0, 137], [138, 281], [282, 412], [413, 592], [593, 675]],
"tournament": "ACF Fall",
"year": 2016
}
```
### Data Fields
The data fields are the same among all splits.
#### mode=first,char_skip=25
- `id`: a `string` feature.
- `qanta_id`: a `int32` feature.
- `proto_id`: a `string` feature.
- `qdb_id`: a `int32` feature.
- `dataset`: a `string` feature.
- `text`: a `string` feature.
- `full_question`: a `string` feature.
- `first_sentence`: a `string` feature.
- `char_idx`: a `int32` feature.
- `sentence_idx`: a `int32` feature.
- `tokenizations`: a dictionary feature containing:
- `feature`: a `int32` feature.
- `answer`: a `string` feature.
- `page`: a `string` feature.
- `raw_answer`: a `string` feature.
- `fold`: a `string` feature.
- `gameplay`: a `bool` feature.
- `category`: a `string` feature.
- `subcategory`: a `string` feature.
- `tournament`: a `string` feature.
- `difficulty`: a `string` feature.
- `year`: a `int32` feature.
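A minimal usage sketch, assuming the dataset loads under the id `qanta` with the config name above; each split name corresponds to a `fold`, and with `mode=first` the `text` field holds the first sentence of each question:
```python
from datasets import load_dataset

ds = load_dataset("qanta", "mode=first,char_skip=25", split="guessdev")
example = ds[0]

print(example["text"])  # first sentence of the question (mode=first)
print(example["page"])  # the Wikipedia page serving as the gold answer
```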
### Data Splits
| name |adversarial|buzzdev|buzztrain|guessdev|guesstrain|buzztest|guesstest|
|-----------------------|----------:|------:|--------:|-------:|---------:|-------:|--------:|
|mode=first,char_skip=25| 1145| 1161| 16706| 1055| 96221| 1953| 2151|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Rodriguez2019QuizbowlTC,
title={Quizbowl: The Case for Incremental Question Answering},
author={Pedro Rodriguez and Shi Feng and Mohit Iyyer and He He and Jordan L. Boyd-Graber},
journal={ArXiv},
year={2019},
volume={abs/1904.04792}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
qasc | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Question Answering via Sentence Composition (QASC)
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
- multiple-choice
task_ids:
- extractive-qa
- multiple-choice-qa
paperswithcode_id: qasc
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
- name: fact1
dtype: string
- name: fact2
dtype: string
- name: combinedfact
dtype: string
- name: formatted_question
dtype: string
splits:
- name: test
num_bytes: 393683
num_examples: 920
- name: train
num_bytes: 4919377
num_examples: 8134
- name: validation
num_bytes: 562352
num_examples: 926
download_size: 1616514
dataset_size: 5875412
---
# Dataset Card for "qasc"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/qasc](https://allenai.org/data/qasc)
- **Repository:** https://github.com/allenai/qasc/
- **Paper:** [QASC: A Dataset for Question Answering via Sentence Composition](https://arxiv.org/abs/1910.11473)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.61 MB
- **Size of the generated dataset:** 5.87 MB
- **Total amount of disk used:** 7.49 MB
### Dataset Summary
QASC is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice
questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 1.61 MB
- **Size of the generated dataset:** 5.87 MB
- **Total amount of disk used:** 7.49 MB
An example of 'validation' looks as follows.
```
{
"answerKey": "F",
"choices": {
"label": ["A", "B", "C", "D", "E", "F", "G", "H"],
"text": ["sand", "occurs over a wide range", "forests", "Global warming", "rapid changes occur", "local weather conditions", "measure of motion", "city life"]
},
"combinedfact": "Climate is generally described in terms of local weather conditions",
"fact1": "Climate is generally described in terms of temperature and moisture.",
"fact2": "Fire behavior is driven by local weather conditions such as winds, temperature and moisture.",
"formatted_question": "Climate is generally described in terms of what? (A) sand (B) occurs over a wide range (C) forests (D) Global warming (E) rapid changes occur (F) local weather conditions (G) measure of motion (H) city life",
"id": "3NGI5ARFTT4HNGVWXAMLNBMFA0U1PG",
"question": "Climate is generally described in terms of what?"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `question`: a `string` feature.
- `choices`: a dictionary feature containing:
- `text`: a `string` feature.
- `label`: a `string` feature.
- `answerKey`: a `string` feature.
- `fact1`: a `string` feature.
- `fact2`: a `string` feature.
- `combinedfact`: a `string` feature.
- `formatted_question`: a `string` feature.
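Because `choices` is a sequence feature, `label` and `text` load as parallel lists, so the gold answer text can be recovered by locating `answerKey` among the labels. A minimal sketch, assuming the dataset loads under the id `qasc`:
```python
from datasets import load_dataset

ds = load_dataset("qasc", split="validation")
example = ds[0]

# `label` and `text` are parallel lists; find the gold answer's text.
labels = example["choices"]["label"]
texts = example["choices"]["text"]
gold_text = texts[labels.index(example["answerKey"])]
print(example["question"], "->", gold_text)
```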
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 8134| 926| 920|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is released under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
### Citation Information
```
@article{allenai:qasc,
author = {Tushar Khot and Peter Clark and Michal Guerquin and Peter Jansen and Ashish Sabharwal},
title = {QASC: A Dataset for Question Answering via Sentence Composition},
journal = {arXiv:1910.11473v2},
year = {2020},
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset. |
allenai/qasper | ---
pretty_name: QASPER
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
language_bcp47:
- en-US
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|s2orc
task_categories:
- question-answering
task_ids:
- closed-domain-qa
paperswithcode_id: qasper
---
# Dataset Card for Qasper
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/qasper](https://allenai.org/data/qasper)
- **Demo:** [https://qasper-demo.apps.allenai.org/](https://qasper-demo.apps.allenai.org/)
- **Paper:** [https://arxiv.org/abs/2105.03011](https://arxiv.org/abs/2105.03011)
- **Blogpost:** [https://medium.com/ai2-blog/question-answering-on-scientific-research-papers-f6d6da9fd55c](https://medium.com/ai2-blog/question-answering-on-scientific-research-papers-f6d6da9fd55c)
- **Leaderboards:** [https://paperswithcode.com/dataset/qasper](https://paperswithcode.com/dataset/qasper)
### Dataset Summary
QASPER is a dataset for question answering on scientific research papers. It consists of 5,049 questions over 1,585 Natural Language Processing papers. Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text. The questions are then answered by a separate set of NLP practitioners, who also provide supporting evidence for their answers.
### Supported Tasks and Leaderboards
- `question-answering`: The dataset can be used to train a model for Question Answering. Success on this task is typically measured by achieving a *high* [F1 score](https://huggingface.co/metrics/f1). The [official baseline model](https://github.com/allenai/qasper-led-baseline) currently achieves a Token F1 score of 33.63 and uses [Longformer](https://huggingface.co/transformers/model_doc/longformer.html). This task has an active leaderboard, which can be found [here](https://paperswithcode.com/sota/question-answering-on-qasper).
- `evidence-selection`: The dataset can be used to train a model for Evidence Selection. Success on this task is typically measured by achieving a *high* [F1 score](https://huggingface.co/metrics/f1). The [official baseline model](https://github.com/allenai/qasper-led-baseline) currently achieves an F1 score of 39.37 and uses [Longformer](https://huggingface.co/transformers/model_doc/longformer.html). This task has an active leaderboard, which can be found [here](https://paperswithcode.com/sota/evidence-selection-on-qasper).
### Languages
English, as it is used in research papers.
## Dataset Structure
### Data Instances
A typical instance in the dataset:
```
{
'id': "Paper ID (string)",
'title': "Paper Title",
'abstract': "paper abstract ...",
'full_text': {
'paragraphs':[["section1_paragraph1_text","section1_paragraph2_text",...],["section2_paragraph1_text","section2_paragraph2_text",...]],
'section_name':["section1_title","section2_title"],...},
'qas': {
'answers':[{
'annotation_id': ["q1_answer1_annotation_id","q1_answer2_annotation_id"],
'answer': [{
'unanswerable':False,
'extractive_spans':["q1_answer1_extractive_span1","q1_answer1_extractive_span2"],
'yes_no':False,
'free_form_answer':"q1_answer1",
'evidence':["q1_answer1_evidence1","q1_answer1_evidence2",..],
'highlighted_evidence':["q1_answer1_highlighted_evidence1","q1_answer1_highlighted_evidence2",..]
},
{
'unanswerable':False,
'extractive_spans':["q1_answer2_extractive_span1","q1_answer2_extractive_span2"],
'yes_no':False,
'free_form_answer':"q1_answer2",
'evidence':["q1_answer2_evidence1","q1_answer2_evidence2",..],
'highlighted_evidence':["q1_answer2_highlighted_evidence1","q1_answer2_highlighted_evidence2",..]
}],
'worker_id':["q1_answer1_worker_id","q1_answer2_worker_id"]
},{...["question2's answers"]..},{...["question3's answers"]..}],
'question':["question1","question2","question3"...],
'question_id':["question1_id","question2_id","question3_id"...],
'question_writer':["question1_writer_id","question2_writer_id","question3_writer_id"...],
'nlp_background':["question1_writer_nlp_background","question2_writer_nlp_background",...],
'topic_background':["question1_writer_topic_background","question2_writer_topic_background",...],
'paper_read': ["question1_writer_paper_read_status","question2_writer_paper_read_status",...],
'search_query':["question1_search_query","question2_search_query","question3_search_query"...],
}
}
```
### Data Fields
The following is an excerpt from the dataset README:
Within "qas", some fields should be obvious. Here is some explanation about the others:
#### Fields specific to questions:
- "nlp_background" shows the experience the question writer had. The values can be "zero" (no experience), "two" (0 - 2 years of experience), "five" (2 - 5 years of experience), and "infinity" (> 5 years of experience). The field may be empty as well, indicating the writer has chosen not to share this information.
- "topic_background" shows how familiar the question writer was with the topic of the paper. The values are "unfamiliar", "familiar", "research" (meaning that the topic is the research area of the writer), or null.
- "paper_read", when specified shows whether the questionwriter has read the paper.
- "search_query", if not empty, is the query the question writer used to find the abstract of the paper from a large pool of abstracts we made available to them.
#### Fields specific to answers
Unanswerable answers have "unanswerable" set to true. The remaining answers have exactly one of the following fields being non-empty.
- "extractive_spans" are spans in the paper which serve as the answer.
- "free_form_answer" is a written out answer.
- "yes_no" is true iff the answer is Yes, and false iff the answer is No.
"evidence" is the set of paragraphs, figures or tables used to arrive at the answer. Tables or figures start with the string "FLOAT SELECTED"
"highlighted_evidence" is the set of sentences the answer providers selected as evidence if they chose textual evidence. The text in the "evidence" field is a mapping from these sentences to the paragraph level. That is, if you see textual evidence in the "evidence" field, it is guaranteed to be entire paragraphs, while that is not the case with "highlighted_evidence".
### Data Splits
| | Train | Valid |
| ----- | ------ | ----- |
| Number of papers | 888 | 281 |
| Number of questions | 2593 | 1005 |
| Number of answers | 2675 | 1764 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
NLP papers: The full text of the papers is extracted from [S2ORC](https://huggingface.co/datasets/s2orc) (Lo et al., 2020)
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
"The annotators are NLP practitioners, not
expert researchers, and it is likely that an expert
would score higher"
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Crowdsourced NLP practitioners
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0)
### Citation Information
```
@inproceedings{Dasigi2021ADO,
title={A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers},
author={Pradeep Dasigi and Kyle Lo and Iz Beltagy and Arman Cohan and Noah A. Smith and Matt Gardner},
year={2021}
}
```
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
|
qed | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|natural_questions
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: qed
pretty_name: QED
tags:
- explanations-in-question-answering
dataset_info:
features:
- name: example_id
dtype: int64
- name: title_text
dtype: string
- name: url
dtype: string
- name: question
dtype: string
- name: paragraph_text
dtype: string
- name: sentence_starts
sequence: int32
- name: original_nq_answers
list:
- name: start
dtype: int32
- name: end
dtype: int32
- name: string
dtype: string
- name: annotation
struct:
- name: referential_equalities
list:
- name: question_reference
struct:
- name: start
dtype: int32
- name: end
dtype: int32
- name: string
dtype: string
- name: sentence_reference
struct:
- name: start
dtype: int32
- name: end
dtype: int32
- name: bridge
dtype: string
- name: string
dtype: string
- name: answer
list:
- name: sentence_reference
struct:
- name: start
dtype: int32
- name: end
dtype: int32
- name: bridge
dtype: string
- name: string
dtype: string
- name: paragraph_reference
struct:
- name: start
dtype: int32
- name: end
dtype: int32
- name: string
dtype: string
- name: explanation_type
dtype: string
- name: selected_sentence
struct:
- name: start
dtype: int32
- name: end
dtype: int32
- name: string
dtype: string
config_name: qed
splits:
- name: train
num_bytes: 8602094
num_examples: 7638
- name: validation
num_bytes: 1584139
num_examples: 1355
download_size: 14083968
dataset_size: 10186233
---
# Dataset Card for QED
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/google-research-datasets/QED)
- **Paper:** [QED: A Framework and Dataset for Explanations in Question Answering](https://arxiv.org/abs/2009.06354)
- **Leaderboard:** N/A
- **Point of Contact:** -
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
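A minimal loading sketch with the `datasets` library, using the fields declared in the metadata above (assuming the dataset is available under the `qed` identifier):
```python
from datasets import load_dataset

# Train and validation splits, per the metadata above.
dataset = load_dataset("qed")
example = dataset["train"][0]

print(example["question"])
print(example["paragraph_text"])

# The annotation carries the explanation structure, e.g. the selected
# sentence and any referential equalities between question and sentence.
annotation = example["annotation"]
print(annotation["selected_sentence"]["string"])
for eq in annotation["referential_equalities"]:
    print(eq["question_reference"]["string"], "==",
          eq["sentence_reference"]["string"])
```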
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset. |
qed_amara | ---
annotations_creators:
- found
language_creators:
- found
language:
- aa
- ab
- ae
- aeb
- af
- ak
- am
- an
- ar
- arq
- arz
- as
- ase
- ast
- av
- ay
- az
- ba
- be
- ber
- bg
- bh
- bi
- bm
- bn
- bnt
- bo
- br
- bs
- bug
- ca
- ce
- ceb
- ch
- cho
- cku
- cnh
- co
- cr
- cs
- cu
- cv
- cy
- da
- de
- dv
- dz
- ee
- efi
- el
- en
- eo
- es
- et
- eu
- fa
- ff
- fi
- fil
- fj
- fo
- fr
- ga
- gd
- gl
- gn
- gu
- ha
- hai
- haw
- haz
- hch
- he
- hi
- ho
- hr
- ht
- hu
- hup
- hus
- hy
- hz
- ia
- id
- ie
- ig
- ik
- inh
- io
- iro
- is
- it
- iu
- ja
- jv
- ka
- kar
- ki
- kj
- kk
- kl
- km
- kn
- ko
- kr
- ksh
- ku
- kv
- kw
- ky
- la
- lb
- lg
- li
- lkt
- lld
- ln
- lo
- lt
- ltg
- lu
- luo
- luy
- lv
- mad
- mfe
- mg
- mi
- mk
- ml
- mn
- mni
- moh
- mos
- mr
- ms
- mt
- mus
- my
- nb
- nci
- nd
- ne
- nl
- nn
- nso
- nv
- ny
- oc
- om
- or
- pa
- pam
- pap
- pi
- pl
- pnb
- prs
- ps
- pt
- qu
- rm
- rn
- ro
- ru
- rup
- rw
- sa
- sc
- scn
- sco
- sd
- sg
- sgn
- sh
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- sv
- sw
- szl
- ta
- te
- tet
- tg
- th
- ti
- tk
- tl
- tlh
- to
- tr
- ts
- tt
- tw
- ug
- uk
- umb
- ur
- uz
- ve
- vi
- vls
- vo
- wa
- wo
- xh
- yaq
- yi
- yo
- za
- zam
- zh
- zu
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: QedAmara
dataset_info:
- config_name: ar-ko
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- ko
splits:
- name: train
num_bytes: 79605277
num_examples: 592589
download_size: 23410393
dataset_size: 79605277
- config_name: de-fr
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 75861416
num_examples: 407224
download_size: 26579871
dataset_size: 75861416
- config_name: es-it
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- es
- it
splits:
- name: train
num_bytes: 80650321
num_examples: 447369
download_size: 28344317
dataset_size: 80650321
- config_name: en-ja
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- ja
splits:
- name: train
num_bytes: 86731218
num_examples: 497531
download_size: 29836171
dataset_size: 86731218
- config_name: he-nl
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- he
- nl
splits:
- name: train
num_bytes: 51448732
num_examples: 273165
download_size: 16642865
dataset_size: 51448732
---
# Dataset Card for QedAmara
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/QED.php
- **Repository:** None
- **Paper:** https://www.aclweb.org/anthology/L14-1675/
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
To load a language pair that isn't one of the predefined configs, simply specify the two language codes as keyword arguments.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/QED.php
E.g.
`dataset = load_dataset("qed_amara", lang1="cs", lang2="nb")`
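A runnable version of the call above, as a minimal sketch assuming the `datasets` library is installed:
```python
from datasets import load_dataset

# Czech-Norwegian Bokmål pair, as in the example above.
dataset = load_dataset("qed_amara", lang1="cs", lang2="nb")

# Each record carries an `id` and a `translation` dict keyed by the
# two language codes (see the features in the metadata above).
pair = dataset["train"][0]["translation"]
print(pair["cs"])
print(pair["nb"])
```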
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- aa
- ab
- ae
- aeb
- af
- aka: `ak`
- amh: `am`
- an
- ar
- arq
- arz
- as
- ase
- ast
- av
- ay
- az
- ba
- bam: `bm`
- be
- ber
- bg
- bh
- bi
- bn
- bnt
- bo
- br
- bs
- bug
- ca
- ce
- ceb
- ch
- cho
- cku
- cnh
- co
- cr
- cs
- cu
- cv
- cy
- da
- de
- dv
- dz
- ee
- efi
- el
- en
- eo
- es
- et
- eu
- fa
- ff
- fi
- fil
- fj
- fo
- fr
- ful: `ff`
- ga
- gd
- gl
- gn
- gu
- hai
- hau: `ha`
- haw
- haz
- hb: ?
- hch
- he
- hi
- ho
- hr
- ht
- hu
- hup
- hus
- hy
- hz
- ia
- ibo: `ig`
- id
- ie
- ik
- inh
- io
- iro
- is
- it
- iu
- ja
- jv
- ka
- kar
- kau: `kr`
- kik: `ki`
- kin: `rw`
- kj
- kk
- kl
- km
- kn
- ko
- ksh
- ku
- kv
- kw
- ky
- la
- lb
- lg
- li
- lin: `ln`
- lkt
- lld
- lo
- lt
- ltg
- lu
- luo
- luy
- lv
- mad
- mfe
- mi
- mk
- ml
- mlg: `mg`
- mn
- mni
- mo: Moldavian (deprecated tag; preferred value: Romanian; Moldavian; Moldovan (`ro`))
- moh
- mos
- mr
- ms
- mt
- mus
- my
- nb
- nci
- nd
- ne
- nl
- nn
- nso
- nv
- nya: `ny`
- oc
- or
- orm: `om`
- pam
- pan: `pa`
- pap
- pi
- pl
- pnb
- prs
- ps
- pt
- que: `qu`
- rm
- ro
- ru
- run: `rn`
- rup
- ry: ?
- sa
- sc
- scn
- sco
- sd
- sg
- sgn
- sh
- si
- sk
- sl
- sm
- sna: `sn`
- som: `so`
- sot: `st`
- sq
- sr
- srp: `sr`
- sv
- swa: `sw`
- szl
- ta
- te
- tet
- tg
- th
- tir: `ti`
- tk
- tl
- tlh
- to
- tr
- ts
- tt
- tw
- ug
- uk
- umb
- ur
- uz
- ve
- vi
- vls
- vo
- wa
- wol: `wo`
- xh
- yaq
- yi
- yor: `yo`
- za
- zam
- zh
- zul: `zu`
## Dataset Structure
### Data Instances
Each instance consists of an `id` and a `translation` dictionary keyed by the two language codes of the selected pair.
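A schematic instance with illustrative placeholder values, following the features declared in the metadata above (shown here for the `ar-ko` config):
```
{
  'id': '<example id>',
  'translation': {
    'ar': '<Arabic subtitle line>',
    'ko': '<Korean subtitle line>'
  }
}
```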
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
quac | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
- extractive-qa
paperswithcode_id: quac
pretty_name: Question Answering in Context
dataset_info:
features:
- name: dialogue_id
dtype: string
- name: wikipedia_page_title
dtype: string
- name: background
dtype: string
- name: section_title
dtype: string
- name: context
dtype: string
- name: turn_ids
sequence: string
- name: questions
sequence: string
- name: followups
sequence:
class_label:
names:
'0': y
'1': n
'2': m
- name: yesnos
sequence:
class_label:
names:
'0': y
'1': n
'2': x
- name: answers
sequence:
- name: texts
sequence: string
- name: answer_starts
sequence: int32
- name: orig_answers
struct:
- name: texts
sequence: string
- name: answer_starts
sequence: int32
config_name: plain_text
splits:
- name: train
num_bytes: 58174754
num_examples: 11567
- name: validation
num_bytes: 7375938
num_examples: 1000
download_size: 77043986
dataset_size: 65550692
---
# Dataset Card for Question Answering in Context
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [QuAC](https://quac.ai/)
- **Paper:** [QuAC: Question Answering in Context](https://arxiv.org/abs/1808.07036)
- **Leaderboard:** [QuAC's leaderboard](https://quac.ai/)
- **Point of Contact:** [Google group](https://groups.google.com/forum/#!forum/quac_ai)
### Dataset Summary
Question Answering in Context is a dataset for modeling, understanding, and participating in information seeking dialog. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts (spans) from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context.
### Supported Tasks and Leaderboards
The core problem involves predicting a text span to answer a question about a Wikipedia section (extractive question answering). Since QuAC questions include a dialog component, each instance includes a “dialog history” of questions and answers asked in the dialog prior to the given question, along with some additional metadata.
The authors provide [an official evaluation script](https://s3.amazonaws.com/my89public/quac/scorer.py).
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
A validation example looks like this (one entry per dialogue):
```
{
'dialogue_id': 'C_6abd2040a75d47168a9e4cca9ca3fed5_0',
'wikipedia_page_title': 'Satchel Paige',
'background': 'Leroy Robert "Satchel" Paige (July 7, 1906 - June 8, 1982) was an American Negro league baseball and Major League Baseball (MLB) pitcher who became a legend in his own lifetime by being known as perhaps the best pitcher in baseball history, by his longevity in the game, and by attracting record crowds wherever he pitched. Paige was a right-handed pitcher, and at age 42 in 1948, he was the oldest major league rookie while playing for the Cleveland Indians. He played with the St. Louis Browns until age 47, and represented them in the All-Star Game in 1952 and 1953.',
'section_title': 'Chattanooga and Birmingham: 1926-29',
'context': 'A former friend from the Mobile slums, Alex Herman, was the player/manager for the Chattanooga White Sox of the minor Negro Southern League. In 1926 he discovered Paige and offered to pay him $250 per month, of which Paige would collect $50 with the rest going to his mother. He also agreed to pay Lula Paige a $200 advance, and she agreed to the contract. The local newspapers--the Chattanooga News and Chattanooga Times--recognized from the beginning that Paige was special. In April 1926, shortly after his arrival, he recorded nine strikeouts over six innings against the Atlanta Black Crackers. Part way through the 1927 season, Paige\'s contract was sold to the Birmingham Black Barons of the major Negro National League (NNL). According to Paige\'s first memoir, his contract was for $450 per month, but in his second he said it was for $275. Pitching for the Black Barons, Paige threw hard but was wild and awkward. In his first big game in late June 1927, against the St. Louis Stars, Paige incited a brawl when his fastball hit the hand of St. Louis catcher Mitchell Murray. Murray then charged the mound and Paige raced for the dugout, but Murray flung his bat and struck Paige above the hip. The police were summoned, and the headline of the Birmingham Reporter proclaimed a "Near Riot." Paige improved and matured as a pitcher with help from his teammates, Sam Streeter and Harry Salmon, and his manager, Bill Gatewood. He finished the 1927 season 7-1 with 69 strikeouts and 26 walks in 89 1/3 innings. Over the next two seasons, Paige went 12-5 and 10-9 while recording 176 strikeouts in 1929. (Several sources credit his 1929 strikeout total as the all-time single-season record for the Negro leagues, though there is variation among the sources about the exact number of strikeouts.) On April 29 of that season he recorded 17 strikeouts in a game against the Cuban Stars, which exceeded what was then the major league record of 16 held by Noodles Hahn and Rube Waddell. Six days later he struck out 18 Nashville Elite Giants, a number that was tied in the white majors by Bob Feller in 1938. Due to his increased earning potential, Barons owner R. T. Jackson would "rent" Paige out to other ball clubs for a game or two to draw a decent crowd, with both Jackson and Paige taking a cut. CANNOTANSWER',
'turn_ids': ['C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#0', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#1', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#2', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#3', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#4', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#5', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#6', 'C_6abd2040a75d47168a9e4cca9ca3fed5_0_q#7'],
'questions': ['what did he do in Chattanooga', 'how did he discover him', 'what position did he play', 'how did they help him', 'when did he go to Birmingham', 'how did he feel about this', 'how did he do with this team', 'What made him leave the team'],
'followups': [0, 2, 0, 1, 0, 1, 0, 1],
'yesnos': [2, 2, 2, 2, 2, 2, 2, 2],
'answers': {
'answer_starts': [
[480, 39, 0, 67, 39],
[2300, 2300, 2300],
[848, 1023, 848, 848, 1298],
[2300, 2300, 2300, 2300, 2300],
[600, 600, 600, 634, 600],
[2300, 2300, 2300],
[939, 1431, 848, 848, 1514],
[2106, 2106, 2165]
],
'texts': [
['April 1926, shortly after his arrival, he recorded nine strikeouts over six innings against the Atlanta Black Crackers.', 'Alex Herman, was the player/manager for the Chattanooga White Sox of the minor Negro Southern League. In 1926 he discovered Paige', 'A former friend from the Mobile slums, Alex Herman, was the player/manager for the Chattanooga White Sox of the minor Negro Southern League.', 'manager for the Chattanooga White Sox of the minor Negro Southern League. In 1926 he discovered Paige and offered to pay him $250 per month,', 'Alex Herman, was the player/manager for the Chattanooga White Sox of the minor Negro Southern League. In 1926 he discovered Paige and offered to pay him $250 per month,'],
['CANNOTANSWER', 'CANNOTANSWER', 'CANNOTANSWER'],
['Pitching for the Black Barons,', 'fastball', 'Pitching for', 'Pitching', 'Paige improved and matured as a pitcher with help from his teammates,'],
['CANNOTANSWER', 'CANNOTANSWER', 'CANNOTANSWER', 'CANNOTANSWER', 'CANNOTANSWER'],
["Part way through the 1927 season, Paige's contract was sold to the Birmingham Black Barons", "Part way through the 1927 season, Paige's contract was sold to the Birmingham Black Barons", "Part way through the 1927 season, Paige's contract was sold to the Birmingham Black Barons", "Paige's contract was sold to the Birmingham Black Barons of the major Negro National League (NNL", "Part way through the 1927 season, Paige's contract was sold to the Birmingham Black Barons"],
['CANNOTANSWER', 'CANNOTANSWER', 'CANNOTANSWER'],
['game in late June 1927, against the St. Louis Stars, Paige incited a brawl when his fastball hit the hand of St. Louis catcher Mitchell Murray.', 'He finished the 1927 season 7-1 with 69 strikeouts and 26 walks in 89 1/3 innings.', 'Pitching for the Black Barons, Paige threw hard but was wild and awkward.', 'Pitching for the Black Barons, Paige threw hard but was wild and awkward.', 'Over the next two seasons, Paige went 12-5 and 10-9 while recording 176 strikeouts in 1929. ('],
['Due to his increased earning potential, Barons owner R. T. Jackson would "rent" Paige out to other ball clubs', 'Due to his increased earning potential, Barons owner R. T. Jackson would "rent" Paige out to other ball clubs for a game or two to draw a decent crowd,', 'Jackson would "rent" Paige out to other ball clubs for a game or two to draw a decent crowd, with both Jackson and Paige taking a cut.']
]
},
'orig_answers': {
'answer_starts': [39, 2300, 1298, 2300, 600, 2300, 1514, 2165],
'texts': ['Alex Herman, was the player/manager for the Chattanooga White Sox of the minor Negro Southern League. In 1926 he discovered Paige and offered to pay him $250 per month,', 'CANNOTANSWER', 'Paige improved and matured as a pitcher with help from his teammates,', 'CANNOTANSWER', "Part way through the 1927 season, Paige's contract was sold to the Birmingham Black Barons", 'CANNOTANSWER', 'Over the next two seasons, Paige went 12-5 and 10-9 while recording 176 strikeouts in 1929. (', 'Jackson would "rent" Paige out to other ball clubs for a game or two to draw a decent crowd, with both Jackson and Paige taking a cut.']
},
}
```
### Data Fields
- `dialogue_id`: ID of the dialogue.
- `wikipedia_page_title`: title of the Wikipedia page.
- `background`: first paragraph of the main Wikipedia article.
- `section_title`: Wikipedia section title.
- `context`: Wikipedia section text.
- `turn_ids`: list of identification of dialogue turns. One list of ids per dialogue.
- `questions`: list of questions in the dialogue. One list of questions per dialogue.
- `followups`: list of followup actions in the dialogue. One list of followups per dialogue. `y`: follow up, `m`: maybe follow up, `n`: don't follow up.
- `yesnos`: list of yes/no in the dialogue. One list of yes/nos per dialogue. `y`: yes, `n`: no, `x`: neither.
- `answers`: dictionary of answers to the questions (validation step of data collection)
- `answer_starts`: list of list of starting offsets. For training, list of single element lists (one answer per question).
- `texts`: list of list of span texts answering questions. For training, list of single element lists (one answer per question).
- `orig_answers`: dictionary of original answers (the ones provided by the teacher in the dialogue)
- `answer_starts`: list of starting offsets
- `texts`: list of span texts answering questions.
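Since `answer_starts` are character offsets into `context`, each answer span can be recovered (and sanity-checked) directly; a minimal sketch, assuming the dataset is available under the `quac` identifier:
```python
from datasets import load_dataset

dataset = load_dataset("quac", split="validation")
dialogue = dataset[0]
context = dialogue["context"]

# `orig_answers` holds the teacher's answer for each question, with
# character offsets into `context`; unanswerable turns point at the
# trailing CANNOTANSWER token appended to the section text.
for question, start, text in zip(dialogue["questions"],
                                 dialogue["orig_answers"]["answer_starts"],
                                 dialogue["orig_answers"]["texts"]):
    assert context[start:start + len(text)] == text  # offsets index into context
    print(question, "->", text)
```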
### Data Splits
QuAC contains 98,407 QA pairs from 13,594 dialogs. The dialogs were conducted on 8,854 unique sections from 3,611 unique Wikipedia articles, and every dialog contains between four and twelve questions.
The dataset comes with a train/dev split such that there is no overlap in sections across splits. Furthermore, the dev and test sets only include one dialog per section, in contrast to the training set, which can have multiple dialogs per section. Dev and test instances come with five reference answers instead of just one as in the training set; the extra references improve the reliability of evaluation, as questions can have multiple valid answer spans. The test set is not publicly available; instead, researchers must submit their models to the [leaderboard](http://quac.ai), which runs them on the hidden test set.
The training set contains 83,568 questions (11,567 dialogues), while 7,354 (1,000) and 7,353 (1,002) separate questions are reserved for the dev and test set respectively.
## Dataset Creation
### Curation Rationale
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
### Source Data
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
#### Initial Data Collection and Normalization
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
#### Who are the source language producers?
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
### Annotations
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
#### Annotation process
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
#### Who are the annotators?
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
### Personal and Sensitive Information
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
### Discussion of Biases
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
### Other Known Limitations
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
## Additional Information
### Dataset Curators
Please refer to the [Datasheet](https://quac.ai/datasheet.pdf) from the authors of the dataset.
### Licensing Information
The dataset is distributed under the MIT license.
### Citation Information
```
@inproceedings{choi-etal-2018-quac,
title = "{Q}u{AC}: Question Answering in Context",
author = "Choi, Eunsol and
He, He and
Iyyer, Mohit and
Yatskar, Mark and
Yih, Wen-tau and
Choi, Yejin and
Liang, Percy and
Zettlemoyer, Luke",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1241",
doi = "10.18653/v1/D18-1241",
pages = "2174--2184",
abstract = "We present QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total). The dialogs involve two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context, as we show in a detailed qualitative evaluation. We also report results for a number of reference models, including a recently state-of-the-art reading comprehension architecture extended to model dialog context. Our best model underperforms humans by 20 F1, suggesting that there is significant room for future work on this data. Dataset, baseline, and leaderboard available at \url{http://quac.ai}.",
}
```
### Contributions
Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset. |
quail | ---
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: Question Answering for Artificial Intelligence (QuAIL)
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: quail
dataset_info:
features:
- name: id
dtype: string
- name: context_id
dtype: string
- name: question_id
dtype: string
- name: domain
dtype: string
- name: metadata
struct:
- name: author
dtype: string
- name: title
dtype: string
- name: url
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: question_type
dtype: string
- name: answers
sequence: string
- name: correct_answer_id
dtype: int32
config_name: quail
splits:
- name: train
num_bytes: 23432697
num_examples: 10246
- name: validation
num_bytes: 4989579
num_examples: 2164
- name: challenge
num_bytes: 1199840
num_examples: 556
download_size: 6402933
dataset_size: 29622116
---
# Dataset Card for "quail"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://text-machine-lab.github.io/blog/2020/quail/](https://text-machine-lab.github.io/blog/2020/quail/)
- **Repository:** https://github.com/text-machine-lab/quail
- **Paper:** [Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks](https://doi.org/10.1609/aaai.v34i05.6398 )
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 6.41 MB
- **Size of the generated dataset:** 29.62 MB
- **Total amount of disk used:** 36.03 MB
### Dataset Summary
QuAIL is a reading comprehension dataset. It contains 15K multiple-choice questions over texts 300-350 tokens long, drawn from 4 domains (news, user stories, fiction, blogs). QuAIL is balanced and annotated for question types.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### quail
- **Size of downloaded dataset files:** 6.41 MB
- **Size of the generated dataset:** 29.62 MB
- **Total amount of disk used:** 36.03 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answers": ["the cousin is not friendly", "the cousin could have been pretier", "not enough information", "the cousin was too nice"],
"context": "\"That fall came and I went back to Michigan and the school year went by and summer came and I never really thought about it. I'm...",
"context_id": "f001",
"correct_answer_id": 0,
"domain": "fiction",
"id": "f001_19",
"metadata": {
"author": "Joseph Devon",
"title": "Black Eyed Susan",
"url": "http://manybooks.net/pages/devonjother08black_eyed_susan/0.html"
},
"question": "After the events in the text what does the author think about the cousin?",
"question_id": "19",
"question_type": "Subsequent_state"
}
```
### Data Fields
The data fields are the same among all splits.
#### quail
- `id`: a `string` feature.
- `context_id`: a `string` feature.
- `question_id`: a `string` feature.
- `domain`: a `string` feature.
- `author`: a `string` feature.
- `title`: a `string` feature.
- `url`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `question_type`: a `string` feature.
- `answers`: a `list` of `string` features.
- `correct_answer_id`: a `int32` feature.
### Data Splits
|name |train|challenge|validation|
|-----|----:|--------:|---------:|
|quail|10246| 556| 2164|
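Note that QuAIL ships a `challenge` split in place of a conventional test split; a minimal loading sketch, assuming the dataset is available under the `quail` identifier:
```python
from datasets import load_dataset

dataset = load_dataset("quail")
# Expected sizes per the table above: train 10246, validation 2164, challenge 556.
print({name: len(split) for name, split in dataset.items()})

# Resolve the gold option via `correct_answer_id`.
example = dataset["train"][0]
print(example["question"], "->", example["answers"][example["correct_answer_id"]])
```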
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{DBLP:conf/aaai/RogersKDR20,
author = {Anna Rogers and
Olga Kovaleva and
Matthew Downey and
Anna Rumshisky},
title = {Getting Closer to {AI} Complete Question Answering: {A} Set of Prerequisite
Real Tasks},
booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}
2020, The Thirty-Second Innovative Applications of Artificial Intelligence
Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational
Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,
February 7-12, 2020},
pages = {8722--8731},
publisher = {{AAAI} Press},
year = {2020},
url = {https://aaai.org/ojs/index.php/AAAI/article/view/6398},
timestamp = {Thu, 04 Jun 2020 13:18:48 +0200},
biburl = {https://dblp.org/rec/conf/aaai/RogersKDR20.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@sai-prasanna](https://github.com/sai-prasanna), [@ngdodd](https://github.com/ngdodd) for adding this dataset. |
quarel | ---
language:
- en
paperswithcode_id: quarel
pretty_name: QuaRel
dataset_info:
features:
- name: id
dtype: string
- name: answer_index
dtype: int32
- name: logical_forms
sequence: string
- name: logical_form_pretty
dtype: string
- name: world_literals
sequence:
- name: world1
dtype: string
- name: world2
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 1072874
num_examples: 1941
- name: test
num_bytes: 307588
num_examples: 552
- name: validation
num_bytes: 154308
num_examples: 278
download_size: 631370
dataset_size: 1534770
---
# Dataset Card for "quarel"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/quarel](https://allenai.org/data/quarel)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.63 MB
- **Size of the generated dataset:** 1.53 MB
- **Total amount of disk used:** 2.17 MB
### Dataset Summary
QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 0.63 MB
- **Size of the generated dataset:** 1.53 MB
- **Total amount of disk used:** 2.17 MB
An example of 'train' looks as follows.
```
{
"answer_index": 0,
"id": "QuaRel_V1_B5_1403",
"logical_form_pretty": "qrel(time, lower, world1) -> qrel(distance, higher, world2) ; qrel(distance, higher, world1)",
"logical_forms": ["(infer (time lower world1) (distance higher world2) (distance higher world1))", "(infer (time lower world2) (distance higher world1) (distance higher world2))"],
"question": "John and Rita are going for a run. Rita gets tired and takes a break on the park bench. After twenty minutes in the park, who has run farther? (A) John (B) Rita",
"world_literals": {
"world1": ["Rita"],
"world2": ["John"]
}
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `answer_index`: a `int32` feature.
- `logical_forms`: a `list` of `string` features.
- `logical_form_pretty`: a `string` feature.
- `world_literals`: a dictionary feature containing:
- `world1`: a `string` feature.
- `world2`: a `string` feature.
- `question`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 1941| 278| 552|
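The answer options are embedded in the `question` string itself (as "(A) ... (B) ..."), so recovering the choice texts takes a small parsing step; a minimal sketch, assuming the dataset is available under the `quarel` identifier:
```python
import re

from datasets import load_dataset

dataset = load_dataset("quarel", split="train")
example = dataset[0]

# The two options are embedded in the question text as "(A) ... (B) ...";
# `answer_index` (0 or 1) selects between them.
match = re.search(r"\(A\)\s*(.*?)\s*\(B\)\s*(.*)$", example["question"])
choices = list(match.groups())
print(example["question"])
print("Answer:", choices[example["answer_index"]])
```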
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{quarel_v1,
  title={QuaRel: A Dataset and Models for Answering Questions about Qualitative Relationships},
  author={Oyvind Tafjord and Peter Clark and Matt Gardner and Wen-tau Yih and Ashish Sabharwal},
  year={2018},
  journal={arXiv:1805.05377v1}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
quartz | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: quartz
pretty_name: QuaRTz
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
- name: para
dtype: string
- name: para_id
dtype: string
- name: para_anno
struct:
- name: effect_prop
dtype: string
- name: cause_dir_str
dtype: string
- name: effect_dir_str
dtype: string
- name: cause_dir_sign
dtype: string
- name: effect_dir_sign
dtype: string
- name: cause_prop
dtype: string
- name: question_anno
struct:
- name: more_effect_dir
dtype: string
- name: less_effect_dir
dtype: string
- name: less_cause_prop
dtype: string
- name: more_effect_prop
dtype: string
- name: less_effect_prop
dtype: string
- name: less_cause_dir
dtype: string
splits:
- name: test
num_bytes: 351374
num_examples: 784
- name: train
num_bytes: 1197525
num_examples: 2696
- name: validation
num_bytes: 175871
num_examples: 384
download_size: 497354
dataset_size: 1724770
---
# Dataset Card for "quartz"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/quartz](https://allenai.org/data/quartz)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.49 MB
- **Size of the generated dataset:** 1.72 MB
- **Total amount of disk used:** 2.22 MB
### Dataset Summary
QuaRTz is a crowdsourced dataset of 3864 multiple-choice questions about open domain qualitative relationships. Each question is paired with one of 405 different background sentences (sometimes short paragraphs).
The dataset is split into train (2696), dev (384) and test (784). A background sentence will only appear in a single split.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 0.49 MB
- **Size of the generated dataset:** 1.72 MB
- **Total amount of disk used:** 2.22 MB
An example of 'train' looks as follows.
```
{
"answerKey": "A",
"choices": {
"label": ["A", "B"],
"text": ["higher", "lower"]
},
"id": "QRQA-10116-3",
"para": "Electrons at lower energy levels, which are closer to the nucleus, have less energy.",
"para_anno": {
"cause_dir_sign": "LESS",
"cause_dir_str": "closer",
"cause_prop": "distance from a nucleus",
"effect_dir_sign": "LESS",
"effect_dir_str": "less",
"effect_prop": "energy"
},
"para_id": "QRSent-10116",
"question": "Electrons further away from a nucleus have _____ energy levels than close ones.",
"question_anno": {
"less_cause_dir": "electron energy levels",
"less_cause_prop": "nucleus",
"less_effect_dir": "lower",
"less_effect_prop": "electron energy levels",
"more_effect_dir": "higher",
"more_effect_prop": "electron energy levels"
}
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `id`: a `string` feature.
- `question`: a `string` feature.
- `choices`: a dictionary feature containing:
- `text`: a `string` feature.
- `label`: a `string` feature.
- `answerKey`: a `string` feature.
- `para`: a `string` feature.
- `para_id`: a `string` feature.
- `para_anno`: a dictionary feature containing:
  - `effect_prop`: a `string` feature.
  - `cause_dir_str`: a `string` feature.
  - `effect_dir_str`: a `string` feature.
  - `cause_dir_sign`: a `string` feature.
  - `effect_dir_sign`: a `string` feature.
  - `cause_prop`: a `string` feature.
- `question_anno`: a dictionary feature containing:
  - `more_effect_dir`: a `string` feature.
  - `less_effect_dir`: a `string` feature.
  - `less_cause_prop`: a `string` feature.
  - `more_effect_prop`: a `string` feature.
  - `less_effect_prop`: a `string` feature.
  - `less_cause_dir`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 2696| 384| 784|
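A minimal sketch pairing a question with its background sentence and resolving the gold choice, assuming the dataset is available under the `quartz` identifier:
```python
from datasets import load_dataset

dataset = load_dataset("quartz", split="train")
example = dataset[0]

# Resolve the gold choice by matching `answerKey` against the labels.
labels = example["choices"]["label"]
texts = example["choices"]["text"]
answer = texts[labels.index(example["answerKey"])]

print("Background:", example["para"])
print("Question:", example["question"])
print("Answer:", answer)
```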
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under Creative Commons [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```
@InProceedings{quartz,
author = {Oyvind Tafjord and Matt Gardner and Kevin Lin and Peter Clark},
title = {"QUARTZ: An Open-Domain Dataset of Qualitative Relationship
Questions"},
year = {"2019"},
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |