| Column | Type | Min length | Max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | sequence | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | sequence | 0 | 25 |
| languages | sequence | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | sequence | 0 | 352 |
| processed_texts | sequence | 1 | 353 |
| tokens_length | sequence | 1 | 353 |
| input_texts | sequence | 1 | 40 |
| embeddings | sequence | 768 | 768 |
15ef643450d589d5883e289ffadeb03563e80a9e |
# Dataset Card for Acronym Identification Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.google.com/view/sdu-aaai21/shared-task
- **Repository:** https://github.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI
- **Paper:** [What Does This Acronym Mean? Introducing a New Dataset for Acronym Identification and Disambiguation](https://arxiv.org/pdf/2010.14678v1.pdf)
- **Leaderboard:** https://competitions.codalab.org/competitions/26609
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This dataset contains the training, validation, and test data for the **Shared Task 1: Acronym Identification** of the AAAI-21 Workshop on Scientific Document Understanding.
### Supported Tasks and Leaderboards
The dataset supports an `acronym-identification` task, where the aim is to predict which tokens in a pre-tokenized sentence correspond to acronyms. The dataset was released for a Shared Task which supported a [leaderboard](https://competitions.codalab.org/competitions/26609).
### Languages
The sentences in the dataset are in English (`en`).
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{'id': 'TR-0',
'labels': [4, 4, 4, 4, 0, 2, 2, 4, 1, 4, 4, 4, 4, 4, 4, 4, 4, 4],
'tokens': ['What',
'is',
'here',
'called',
'controlled',
'natural',
'language',
'(',
'CNL',
')',
'has',
'traditionally',
'been',
'given',
'many',
'different',
'names',
'.']}
```
Please note that for test set sentences only the `id` and `tokens` fields are meaningful; the `labels` in the test set are placeholders (all `O`) and can be ignored.
### Data Fields
The data instances have the following fields:
- `id`: a `string` variable representing the example id, unique across the full dataset
- `tokens`: a list of `string` variables representing the word-tokenized sentence
- `labels`: a list of `categorical` variables with possible values `["B-long", "B-short", "I-long", "I-short", "O"]` corresponding to a BIO scheme. `-long` corresponds to the expanded acronym, such as *controlled natural language* here, and `-short` to the abbreviation, `CNL` here.
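The snippet below is a minimal sketch of loading the dataset with the Hugging Face `datasets` library and mapping the integer `labels` back to their BIO tag names; the feature access follows the standard `datasets` API, and the printed output is illustrative.

```python
from datasets import load_dataset

ds = load_dataset("acronym_identification", split="train")

# `labels` is a sequence of class labels; recover the tag names.
label_names = ds.features["labels"].feature.names
# -> ['B-long', 'B-short', 'I-long', 'I-short', 'O']

# Print each token next to its BIO tag for the first example.
example = ds[0]
for token, label_id in zip(example["tokens"], example["labels"]):
    print(f"{token}\t{label_names[label_id]}")
```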
### Data Splits
The training, validation, and test sets contain `14,006`, `1,717`, and `1,750` sentences respectively.
## Dataset Creation
### Curation Rationale
> First, most of the existing datasets for acronym identification (AI) are either limited in their sizes or created using simple rule-based methods.
> This is unfortunate as rules are in general not able to capture all the diverse forms to express acronyms and their long forms in text.
> Second, most of the existing datasets are in the medical domain, ignoring the challenges in other scientific domains.
> In order to address these limitations this paper introduces two new datasets for Acronym Identification.
> Notably, our datasets are annotated by human to achieve high quality and have substantially larger numbers of examples than the existing AI datasets in the non-medical domain.
### Source Data
#### Initial Data Collection and Normalization
> In order to prepare a corpus for acronym annotation, we collect a corpus of 6,786 English papers from arXiv.
> These papers consist of 2,031,592 sentences that would be used for data annotation for AI in this work.
The dataset paper does not report the exact tokenization method.
#### Who are the source language producers?
The language comes from papers hosted on the online digital archive [arXiv](https://arxiv.org/). No more information is available on the selection process or the identity of the writers.
### Annotations
#### Annotation process
> Each sentence for annotation needs to contain at least one word in which more than half of the characters are capital letters (i.e., acronym candidates).
> Afterward, we search for a sub-sequence of words in which the concatenation of the first one, two or three characters of the words (in the order of the words in the sub-sequence) could form an acronym candidate.
> We call the sub-sequence a long form candidate. If we cannot find any long form candidate, we remove the sentence.
> Using this process, we end up with 17,506 sentences to be annotated manually by the annotators from Amazon Mechanical Turk (MTurk).
> In particular, we create a HIT for each sentence and ask the workers to annotate the short forms and the long forms in the sentence.
> In case of disagreements, if two out of three workers agree on an annotation, we use majority voting to decide the correct annotation.
> Otherwise, a fourth annotator is hired to resolve the conflict.
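Since the quoted selection procedure is purely mechanical, a rough sketch may make it concrete. The function names below are ours (the paper does not publish code), and edge cases may differ from the authors' implementation:

```python
def is_acronym_candidate(word: str) -> bool:
    """A word where more than half of the characters are capital letters."""
    letters = [c for c in word if c.isalpha()]
    return bool(letters) and sum(c.isupper() for c in letters) > len(letters) / 2


def long_form_matches(acronym: str, words: list) -> bool:
    """Can concatenating the first one, two or three characters of each word,
    in order, form the acronym? Mirrors the long-form-candidate search above."""
    target = acronym.lower()

    def search(word_idx: int, pos: int) -> bool:
        if pos == len(target):
            return word_idx == len(words)
        if word_idx == len(words):
            return False
        word = words[word_idx].lower()
        for k in (1, 2, 3):
            prefix = word[:k]
            if prefix and target.startswith(prefix, pos):
                if search(word_idx + 1, pos + len(prefix)):
                    return True
        return False

    return search(0, 0)


# Example from the training sample above:
assert is_acronym_candidate("CNL")
assert long_form_matches("CNL", ["controlled", "natural", "language"])
```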
#### Who are the annotators?
Workers were recruited through Amazon Mechanical Turk and paid $0.05 per annotation. No further demographic information is provided.
### Personal and Sensitive Information
Papers published on arXiv are unlikely to contain much personal information, although some include poorly chosen examples that reveal personal details, so the data should be used with care.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
The dataset is provided for research purposes only. Please check the dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The dataset provided for this shared task is licensed under CC BY-NC-SA 4.0 international license.
### Citation Information
```
@inproceedings{Veyseh2020,
author = {Amir Pouran Ben Veyseh and
Franck Dernoncourt and
Quan Hung Tran and
Thien Huu Nguyen},
editor = {Donia Scott and
N{\'{u}}ria Bel and
Chengqing Zong},
title = {What Does This Acronym Mean? Introducing a New Dataset for Acronym
Identification and Disambiguation},
booktitle = {Proceedings of the 28th International Conference on Computational
Linguistics, {COLING} 2020, Barcelona, Spain (Online), December 8-13,
2020},
pages = {3285--3301},
publisher = {International Committee on Computational Linguistics},
year = {2020},
url = {https://doi.org/10.18653/v1/2020.coling-main.292},
doi = {10.18653/v1/2020.coling-main.292}
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. | acronym_identification | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"acronym-identification",
"arxiv:2010.14678",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": [], "paperswithcode_id": "acronym-identification", "pretty_name": "Acronym Identification Dataset", "tags": ["acronym-identification"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "B-long", "1": "B-short", "2": "I-long", "3": "I-short", "4": "O"}}}}], "splits": [{"name": "train", "num_bytes": 7792771, "num_examples": 14006}, {"name": "validation", "num_bytes": 952689, "num_examples": 1717}, {"name": "test", "num_bytes": 987712, "num_examples": 1750}], "download_size": 2071007, "dataset_size": 9733172}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "train-eval-index": [{"config": "default", "task": "token-classification", "task_id": "entity_extraction", "splits": {"eval_split": "test"}, "col_mapping": {"tokens": "tokens", "labels": "tags"}}]} | 2024-01-09T11:39:57+00:00 | [
"2010.14678"
] | [
"en"
] |
4ba01c71687dd7c996597042449448ea312126cf |
# Dataset Card for Adverse Drug Reaction Data v2
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.sciencedirect.com/science/article/pii/S1532046412000615
- **Repository:** [Needs More Information]
- **Paper:** https://www.sciencedirect.com/science/article/pii/S1532046412000615
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
ADE-Corpus-V2 Dataset: Adverse Drug Reaction Data.
This is a dataset for classifying whether a sentence is ADE-related (True) or not (False), and for relation extraction between adverse drug events and drugs.
DRUG-AE.rel provides relations between drugs and adverse effects.
DRUG-DOSE.rel provides relations between drugs and dosages.
ADE-NEG.txt provides all sentences in the ADE corpus that DO NOT contain any drug-related adverse effects.
### Supported Tasks and Leaderboards
Sentence classification (ADE-related or not), relation extraction
### Languages
English
## Dataset Structure
### Data Instances
#### Config - `Ade_corpus_v2_classification`
```
{
'label': 1,
'text': 'Intravenous azithromycin-induced ototoxicity.'
}
```
#### Config - `Ade_corpus_v2_drug_ade_relation`
```
{
'drug': 'azithromycin',
'effect': 'ototoxicity',
'indexes': {
'drug': {
'end_char': [24],
'start_char': [12]
},
'effect': {
'end_char': [44],
'start_char': [33]
}
},
'text': 'Intravenous azithromycin-induced ototoxicity.'
}
```
#### Config - `Ade_corpus_v2_drug_dosage_relation`
```
{
'dosage': '4 times per day',
'drug': 'insulin',
'indexes': {
'dosage': {
'end_char': [56],
'start_char': [41]
},
'drug': {
'end_char': [40],
'start_char': [33]
}
},
'text': 'She continued to receive regular insulin 4 times per day over the following 3 years with only occasional hives.'
}
```
### Data Fields
#### Config - `Ade_corpus_v2_classification`
- `text` - Input text.
- `label` - Whether the sentence is adverse drug effect (ADE) related (1) or not (0).
#### Config - `Ade_corpus_v2_drug_ade_relation`
- `text` - Input text.
- `drug` - Name of drug.
- `effect` - Effect caused by the drug.
- `indexes.drug.start_char` - Start index of `drug` string in text.
- `indexes.drug.end_char` - End index of `drug` string in text.
- `indexes.effect.start_char` - Start index of `effect` string in text.
- `indexes.effect.end_char` - End index of `effect` string in text.
#### Config - `Ade_corpus_v2_drug_dosage_relation`
- `text` - Input text.
- `drug` - Name of drug.
- `dosage` - Dosage of the drug.
- `indexes.drug.start_char` - Start index of `drug` string in text.
- `indexes.drug.end_char` - End index of `drug` string in text.
- `indexes.dosage.start_char` - Start index of `dosage` string in text.
- `indexes.dosage.end_char` - End index of `dosage` string in text.
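As a hedged sketch (assuming the standard Hugging Face `datasets` API, and that `end_char` is exclusive, as the samples above suggest), each config is loaded separately and the character indexes slice the mention straight out of `text`:

```python
from datasets import load_dataset

# Each config is a separate load; all three ship only a "train" split.
clf = load_dataset("ade_corpus_v2", "Ade_corpus_v2_classification", split="train")
rel = load_dataset("ade_corpus_v2", "Ade_corpus_v2_drug_ade_relation", split="train")

print(clf[0])  # e.g. {'text': 'Intravenous azithromycin-induced ototoxicity.', 'label': 1}

# The start/end character indexes recover the drug mention from the text.
ex = rel[0]
start = ex["indexes"]["drug"]["start_char"][0]
end = ex["indexes"]["drug"]["end_char"][0]
assert ex["text"][start:end] == ex["drug"]
```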
### Data Splits
Each config ships a single `train` split:

| Config | Train |
| ------ | ----- |
| `Ade_corpus_v2_classification` | 23,516 |
| `Ade_corpus_v2_drug_ade_relation` | 6,821 |
| `Ade_corpus_v2_drug_dosage_relation` | 279 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
```
@article{GURULINGAPPA2012885,
title = "Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports",
journal = "Journal of Biomedical Informatics",
volume = "45",
number = "5",
pages = "885 - 892",
year = "2012",
note = "Text Mining and Natural Language Processing in Pharmacogenomics",
issn = "1532-0464",
doi = "10.1016/j.jbi.2012.04.008",
url = "http://www.sciencedirect.com/science/article/pii/S1532046412000615",
author = "Harsha Gurulingappa and Abdul Mateen Rajput and Angus Roberts and Juliane Fluck and Martin Hofmann-Apitius and Luca Toldo",
keywords = "Adverse drug effect, Benchmark corpus, Annotation, Harmonization, Sentence classification",
abstract = "A significant amount of information about drug-related safety issues such as adverse effects are published in medical case reports that can only be explored by human readers due to their unstructured nature. The work presented here aims at generating a systematically annotated corpus that can support the development and validation of methods for the automatic extraction of drug-related adverse effects from medical case reports. The documents are systematically double annotated in various rounds to ensure consistent annotations. The annotated documents are finally harmonized to generate representative consensus annotations. In order to demonstrate an example use case scenario, the corpus was employed to train and validate models for the classification of informative against the non-informative sentences. A Maximum Entropy classifier trained with simple features and evaluated by 10-fold cross-validation resulted in the F1 score of 0.70 indicating a potential useful application of the corpus."
}
```
### Contributions
Thanks to [@Nilanshrajput](https://github.com/Nilanshrajput), [@lhoestq](https://github.com/lhoestq) for adding this dataset. | ade_corpus_v2 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:coreference-resolution",
"task_ids:fact-checking",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "1K<n<10K", "n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification", "token-classification"], "task_ids": ["coreference-resolution", "fact-checking"], "pretty_name": "Adverse Drug Reaction Data v2", "config_names": ["Ade_corpus_v2_classification", "Ade_corpus_v2_drug_ade_relation", "Ade_corpus_v2_drug_dosage_relation"], "dataset_info": [{"config_name": "Ade_corpus_v2_classification", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Not-Related", "1": "Related"}}}}], "splits": [{"name": "train", "num_bytes": 3403699, "num_examples": 23516}], "download_size": 1706476, "dataset_size": 3403699}, {"config_name": "Ade_corpus_v2_drug_ade_relation", "features": [{"name": "text", "dtype": "string"}, {"name": "drug", "dtype": "string"}, {"name": "effect", "dtype": "string"}, {"name": "indexes", "struct": [{"name": "drug", "sequence": [{"name": "start_char", "dtype": "int32"}, {"name": "end_char", "dtype": "int32"}]}, {"name": "effect", "sequence": [{"name": "start_char", "dtype": "int32"}, {"name": "end_char", "dtype": "int32"}]}]}], "splits": [{"name": "train", "num_bytes": 1545993, "num_examples": 6821}], "download_size": 491362, "dataset_size": 1545993}, {"config_name": "Ade_corpus_v2_drug_dosage_relation", "features": [{"name": "text", "dtype": "string"}, {"name": "drug", "dtype": "string"}, {"name": "dosage", "dtype": "string"}, {"name": "indexes", "struct": [{"name": "drug", "sequence": [{"name": "start_char", "dtype": "int32"}, {"name": "end_char", "dtype": "int32"}]}, {"name": "dosage", "sequence": [{"name": "start_char", "dtype": "int32"}, {"name": "end_char", "dtype": "int32"}]}]}], "splits": [{"name": "train", "num_bytes": 64697, "num_examples": 279}], "download_size": 33004, "dataset_size": 64697}], "configs": [{"config_name": "Ade_corpus_v2_classification", "data_files": [{"split": "train", "path": "Ade_corpus_v2_classification/train-*"}]}, {"config_name": "Ade_corpus_v2_drug_ade_relation", "data_files": [{"split": "train", "path": "Ade_corpus_v2_drug_ade_relation/train-*"}]}, {"config_name": "Ade_corpus_v2_drug_dosage_relation", "data_files": [{"split": "train", "path": "Ade_corpus_v2_drug_dosage_relation/train-*"}]}], "train-eval-index": [{"config": "Ade_corpus_v2_classification", "task": "text-classification", "task_id": "multi_class_classification", "splits": {"train_split": "train"}, "col_mapping": {"text": "text", "label": "target"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 macro", "args": {"average": "macro"}}, {"type": "f1", "name": "F1 micro", "args": {"average": "micro"}}, {"type": "f1", "name": "F1 weighted", "args": {"average": "weighted"}}, {"type": "precision", "name": "Precision macro", "args": {"average": "macro"}}, {"type": "precision", "name": "Precision micro", "args": {"average": "micro"}}, {"type": "precision", "name": "Precision weighted", "args": {"average": "weighted"}}, {"type": "recall", "name": "Recall macro", "args": {"average": "macro"}}, {"type": "recall", "name": "Recall micro", "args": {"average": "micro"}}, {"type": "recall", "name": "Recall weighted", "args": {"average": "weighted"}}]}]} | 2024-01-09T11:42:58+00:00 | [] | [
"en"
]
Dataset Card for Adverse Drug Reaction Data v2
==============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository:
* Paper: URL
* Leaderboard:
* Point of Contact:
### Dataset Summary
ADE-Corpus-V2 Dataset: Adverse Drug Reaction Data.
This dataset supports two tasks: classifying whether a sentence is ADE-related (True) or not (False), and relation extraction between adverse drug events and drugs.
The DRUG-AE.rel file provides relations between drugs and adverse effects.
The DRUG-DOSE.rel file provides relations between drugs and dosages.
The ADE-NEG.txt file provides all sentences in the ADE corpus that DO NOT contain any drug-related adverse effects.
### Supported Tasks and Leaderboards
Sentence classification (ADE-related or not), Relation Extraction
### Languages
English
Dataset Structure
-----------------
### Data Instances
#### Config - 'Ade\_corpus\_v2\_classification'
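A hypothetical instance, for illustration only (the sentence is invented; the field layout follows the dataset metadata earlier in this document):

```
{
  'text': 'Intravenous azithromycin-induced ototoxicity.',
  'label': 1
}
```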
#### Config - 'Ade\_corpus\_v2\_drug\_ade\_relation'
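A hypothetical instance sketch; the sentence and character offsets below are invented for illustration, and the nested 'indexes' layout follows the dataset metadata:

```
{
  'text': 'Intravenous azithromycin-induced ototoxicity.',
  'drug': 'azithromycin',
  'effect': 'ototoxicity',
  'indexes': {
    'drug': {'start_char': [12], 'end_char': [24]},
    'effect': {'start_char': [33], 'end_char': [44]}
  }
}
```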
#### Config - 'Ade\_corpus\_v2\_drug\_dosage\_relation'
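A hypothetical instance sketch, again with an invented sentence and offsets:

```
{
  'text': 'She was treated with 100 mg of doxycycline twice daily.',
  'drug': 'doxycycline',
  'dosage': '100 mg',
  'indexes': {
    'drug': {'start_char': [31], 'end_char': [42]},
    'dosage': {'start_char': [21], 'end_char': [27]}
  }
}
```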
### Data Fields
#### Config - 'Ade\_corpus\_v2\_classification'
* 'text' - Input text.
* 'label' - Whether the sentence is adverse drug effect (ADE) related (1) or not (0).
#### Config - 'Ade\_corpus\_v2\_drug\_ade\_relation'
* 'text' - Input text.
* 'drug' - Name of drug.
* 'effect' - Effect caused by the drug.
* 'indexes.drug.start\_char' - Start index of 'drug' string in text.
* 'indexes.drug.end\_char' - End index of 'drug' string in text.
* 'indexes.effect.start\_char' - Start index of 'effect' string in text.
* 'indexes.effect.end\_char' - End index of 'effect' string in text.
#### Config - 'Ade\_corpus\_v2\_drug\_dosage\_relation'
* 'text' - Input text.
* 'drug' - Name of drug.
* 'dosage' - Dosage of the drug.
* 'indexes.drug.start\_char' - Start index of 'drug' string in text.
* 'indexes.drug.end\_char' - End index of 'drug' string in text.
* 'indexes.dosage.start\_char' - Start index of 'dosage' string in text.
* 'indexes.dosage.end\_char' - End index of 'dosage' string in text.
### Data Splits
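Each configuration ships a single 'train' split. Per the dataset metadata earlier in this document, the splits contain 23,516 ('Ade\_corpus\_v2\_classification'), 6,821 ('Ade\_corpus\_v2\_drug\_ade\_relation'), and 279 ('Ade\_corpus\_v2\_drug\_dosage\_relation') examples. A minimal loading sketch (assuming the Hugging Face Hub id 'ade\_corpus\_v2'; config names are taken from the metadata):

```
from datasets import load_dataset

# Each config provides only a train split.
ds = load_dataset("ade_corpus_v2", "Ade_corpus_v2_classification")
print(len(ds["train"]))  # expected: 23516
```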
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @Nilanshrajput, @lhoestq for adding this dataset.
# Dataset Card for adversarialQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [adversarialQA homepage](https://adversarialqa.github.io/)
- **Repository:** [adversarialQA repository](https://github.com/maxbartolo/adversarialQA)
- **Paper:** [Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension](https://arxiv.org/abs/2002.00293)
- **Leaderboard:** [Dynabench QA Round 1 Leaderboard](https://dynabench.org/tasks/2#overall)
- **Point of Contact:** [Max Bartolo](max.bartolo@ucl.ac.uk)
### Dataset Summary
We have created three new Reading Comprehension datasets constructed using an adversarial model-in-the-loop.
We use three different models: BiDAF (Seo et al., 2016), BERT-Large (Devlin et al., 2018), and RoBERTa-Large (Liu et al., 2019) in the annotation loop and construct three datasets: D(BiDAF), D(BERT), and D(RoBERTa), each with 10,000 training examples, 1,000 validation examples, and 1,000 test examples.
The adversarial human annotation paradigm ensures that these datasets consist of questions that current state-of-the-art models (at least the ones used as adversaries in the annotation loop) find challenging. The three AdversarialQA round 1 datasets provide a training and evaluation resource for such methods.
### Supported Tasks and Leaderboards
`extractive-qa`: The dataset can be used to train a model for Extractive Question Answering, which consists in selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap [F1 score](https://huggingface.co/metrics/f1). The [RoBERTa-Large](https://huggingface.co/roberta-large) model trained on all the data combined with [SQuAD](https://arxiv.org/abs/1606.05250) currently achieves 64.35% F1. This task has an active leaderboard and is available as round 1 of the QA task on [Dynabench](https://dynabench.org/tasks/2#overall) and ranks models based on F1 score.
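For reference, a minimal sketch of such a word-overlap F1 (token-level precision/recall between a predicted and a gold answer; the official SQuAD script additionally normalizes punctuation and articles, which this sketch omits):

```
from collections import Counter

def word_overlap_f1(prediction: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(word_overlap_f1("organic compounds", "important organic compounds"))  # 0.8
```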
### Languages
The text in the dataset is in English. The associated BCP-47 code is `en`.
## Dataset Structure
### Data Instances
Data is provided in the same format as SQuAD 1.1. An example is shown below:
```
{
"data": [
{
"title": "Oxygen",
"paragraphs": [
{
"context": "Among the most important classes of organic compounds that contain oxygen are (where \"R\" is an organic group): alcohols (R-OH); ethers (R-O-R); ketones (R-CO-R); aldehydes (R-CO-H); carboxylic acids (R-COOH); esters (R-COO-R); acid anhydrides (R-CO-O-CO-R); and amides (R-C(O)-NR2). There are many important organic solvents that contain oxygen, including: acetone, methanol, ethanol, isopropanol, furan, THF, diethyl ether, dioxane, ethyl acetate, DMF, DMSO, acetic acid, and formic acid. Acetone ((CH3)2CO) and phenol (C6H5OH) are used as feeder materials in the synthesis of many different substances. Other important organic compounds that contain oxygen are: glycerol, formaldehyde, glutaraldehyde, citric acid, acetic anhydride, and acetamide. Epoxides are ethers in which the oxygen atom is part of a ring of three atoms.",
"qas": [
{
"id": "22bbe104aa72aa9b511dd53237deb11afa14d6e3",
"question": "In addition to having oxygen, what do alcohols, ethers and esters have in common, according to the article?",
"answers": [
{
"answer_start": 36,
"text": "organic compounds"
}
]
},
{
"id": "4240a8e708c703796347a3702cf1463eed05584a",
"question": "What letter does the abbreviation for acid anhydrides both begin and end in?",
"answers": [
{
"answer_start": 244,
"text": "R"
}
]
},
{
"id": "0681a0a5ec852ec6920d6a30f7ef65dced493366",
"question": "Which of the organic compounds, in the article, contains nitrogen?",
"answers": [
{
"answer_start": 262,
"text": "amides"
}
]
},
{
"id": "2990efe1a56ccf81938fa5e18104f7d3803069fb",
"question": "Which of the important classes of organic compounds, in the article, has a number in its abbreviation?",
"answers": [
{
"answer_start": 262,
"text": "amides"
}
]
}
]
}
]
}
]
}
```
### Data Fields
- title: the title of the Wikipedia page from which the context is sourced
- context: the context/passage
- id: a string identifier for each question
- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an `answer_start` field which is the character index of the start of the answer span, and a `text` field which is the answer text.
Note that no answers are provided in the test set. Indeed, this dataset is part of the DynaBench benchmark, for which you can submit your predictions on the [website](https://dynabench.org/tasks/2#1).
### Data Splits
The dataset is composed of three different datasets constructed using different models in the loop: BiDAF, BERT-Large, and RoBERTa-Large. Each of these has 10,000 training examples, 1,000 validation examples, and 1,000 test examples for a total of 30,000/3,000/3,000 train/validation/test examples.
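Assuming the Hugging Face Hub id shown at the end of this card ('UCLNLP/adversarial_qa') and the config names from the metadata, the subsets can be loaded as follows (a sketch, not an official snippet):

```
from datasets import load_dataset

# "adversarialQA" is the combined set; "dbidaf", "dbert", and "droberta"
# select the individual model-in-the-loop subsets.
combined = load_dataset("UCLNLP/adversarial_qa", "adversarialQA")
print({split: len(combined[split]) for split in combined})
# expected: {'train': 30000, 'validation': 3000, 'test': 3000}
```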
## Dataset Creation
### Curation Rationale
This dataset was collected to provide a more challenging and diverse Reading Comprehension dataset to state-of-the-art models.
### Source Data
#### Initial Data Collection and Normalization
The source passages are from Wikipedia and are the same as those used in [SQuAD v1.1](https://arxiv.org/abs/1606.05250).
#### Who are the source language producers?
The source language producers are Wikipedia editors for the passages, and human annotators on Mechanical Turk for the questions.
### Annotations
#### Annotation process
The dataset is collected through an adversarial human annotation process which pairs a human annotator and a reading comprehension model in an interactive setting. The human is presented with a passage for which they write a question and highlight the correct answer. The model then tries to answer the question, and, if it fails to answer correctly, the human wins. Otherwise, the human modifies or re-writes their question until they successfully fool the model.
#### Who are the annotators?
The annotators are from Amazon Mechanical Turk, geographically restricted to the USA, UK, and Canada, having previously successfully completed at least 1,000 HITs, and having a HIT approval rate greater than 98%. Crowdworkers undergo intensive training and qualification prior to annotation.
### Personal and Sensitive Information
No annotator identifying details are provided.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better question answering systems.
A system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a test bed for questions which contemporary state-of-the-art models struggle to answer correctly, thus often requiring more complex comprehension abilities than, say, detecting phrases explicitly mentioned in the passage with high overlap to the question.
It should be noted, however, that the source passages are both domain-restricted and linguistically specific, and that provided questions and answers do not constitute any particular social application.
### Discussion of Biases
The dataset may exhibit various biases in terms of the source passage selection, annotated questions and answers, as well as algorithmic biases resulting from the adversarial annotation protocol.
### Other Known Limitations
N/a
## Additional Information
### Dataset Curators
This dataset was initially created by Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp, during work carried out at University College London (UCL).
### Licensing Information
This dataset is distributed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
```
@article{bartolo2020beat,
author = {Bartolo, Max and Roberts, Alastair and Welbl, Johannes and Riedel, Sebastian and Stenetorp, Pontus},
title = {Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension},
journal = {Transactions of the Association for Computational Linguistics},
volume = {8},
number = {},
pages = {662-678},
year = {2020},
doi = {10.1162/tacl\_a\_00338},
URL = { https://doi.org/10.1162/tacl_a_00338 },
eprint = { https://doi.org/10.1162/tacl_a_00338 },
abstract = { Innovations in annotation methodology have been a catalyst for Reading Comprehension (RC) datasets and models. One recent trend to challenge current RC models is to involve a model in the annotation process: Humans create questions adversarially, such that the model fails to answer them correctly. In this work we investigate this annotation methodology and apply it in three different settings, collecting a total of 36,000 samples with progressively stronger models in the annotation loop. This allows us to explore questions such as the reproducibility of the adversarial effect, transfer from data collected with varying model-in-the-loop strengths, and generalization to data collected without a model. We find that training on adversarially collected samples leads to strong generalization to non-adversarially collected datasets, yet with progressive performance deterioration with increasingly stronger models-in-the-loop. Furthermore, we find that stronger models can still learn from datasets collected with substantially weaker models-in-the-loop. When trained on data collected with a BiDAF model in the loop, RoBERTa achieves 39.9F1 on questions that it cannot answer when trained on SQuAD—only marginally lower than when trained on data collected using RoBERTa itself (41.0F1). }
}
```
### Contributions
Thanks to [@maxbartolo](https://github.com/maxbartolo) for adding this dataset. | UCLNLP/adversarial_qa | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2002.00293",
"arxiv:1606.05250",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa", "open-domain-qa"], "paperswithcode_id": "adversarialqa", "pretty_name": "adversarialQA", "dataset_info": [{"config_name": "adversarialQA", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "metadata", "struct": [{"name": "split", "dtype": "string"}, {"name": "model_in_the_loop", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 27858686, "num_examples": 30000}, {"name": "validation", "num_bytes": 2757092, "num_examples": 3000}, {"name": "test", "num_bytes": 2919479, "num_examples": 3000}], "download_size": 5301049, "dataset_size": 33535257}, {"config_name": "dbert", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "metadata", "struct": [{"name": "split", "dtype": "string"}, {"name": "model_in_the_loop", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 9345521, "num_examples": 10000}, {"name": "validation", "num_bytes": 918156, "num_examples": 1000}, {"name": "test", "num_bytes": 971290, "num_examples": 1000}], "download_size": 2689032, "dataset_size": 11234967}, {"config_name": "dbidaf", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "metadata", "struct": [{"name": "split", "dtype": "string"}, {"name": "model_in_the_loop", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 9282482, "num_examples": 10000}, {"name": "validation", "num_bytes": 917907, "num_examples": 1000}, {"name": "test", "num_bytes": 946947, "num_examples": 1000}], "download_size": 2721341, "dataset_size": 11147336}, {"config_name": "droberta", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "metadata", "struct": [{"name": "split", "dtype": "string"}, {"name": "model_in_the_loop", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 9270683, "num_examples": 10000}, {"name": "validation", "num_bytes": 925029, "num_examples": 1000}, {"name": "test", "num_bytes": 1005242, "num_examples": 1000}], "download_size": 2815452, "dataset_size": 11200954}], "configs": [{"config_name": "adversarialQA", "data_files": [{"split": "train", "path": "adversarialQA/train-*"}, {"split": "validation", "path": "adversarialQA/validation-*"}, {"split": "test", "path": "adversarialQA/test-*"}]}, {"config_name": "dbert", "data_files": [{"split": "train", "path": "dbert/train-*"}, {"split": "validation", "path": 
"dbert/validation-*"}, {"split": "test", "path": "dbert/test-*"}]}, {"config_name": "dbidaf", "data_files": [{"split": "train", "path": "dbidaf/train-*"}, {"split": "validation", "path": "dbidaf/validation-*"}, {"split": "test", "path": "dbidaf/test-*"}]}, {"config_name": "droberta", "data_files": [{"split": "train", "path": "droberta/train-*"}, {"split": "validation", "path": "droberta/validation-*"}, {"split": "test", "path": "droberta/test-*"}]}], "train-eval-index": [{"config": "adversarialQA", "task": "question-answering", "task_id": "extractive_question_answering", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "question", "context": "context", "answers": {"text": "text", "answer_start": "answer_start"}}, "metrics": [{"type": "squad", "name": "SQuAD"}]}]} | 2023-12-21T14:20:00+00:00 | [
"2002.00293",
"1606.05250"
] | [
"en"
]
"### Other Known Limitations\n\nN/a",
"## Additional Information",
"### Dataset Curators\n\nThis dataset was initially created by Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp, during work carried out at University College London (UCL).",
"### Licensing Information\n\nThis dataset is distributed under CC BY-SA 3.0.",
"### Contributions\n\nThanks to @maxbartolo for adding this dataset."
] | [
121,
8,
120,
64,
191,
134,
25,
6,
24,
141,
84,
5,
36,
4,
31,
37,
5,
107,
73,
18,
8,
154,
52,
10,
5,
50,
19,
17
] | [
"passage: TAGS\n#task_categories-question-answering #task_ids-extractive-qa #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-4.0 #arxiv-2002.00293 #arxiv-1606.05250 #region-us \n# Dataset Card for adversarialQA## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: adversarialQA homepage\n- Repository: adversarialQA repository\n- Paper: Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension\n- Leaderboard: Dynabench QA Round 1 Leaderboard\n- Point of Contact: Max Bartolo### Dataset Summary\n\nWe have created three new Reading Comprehension datasets constructed using an adversarial model-in-the-loop.\n\nWe use three different models; BiDAF (Seo et al., 2016), BERTLarge (Devlin et al., 2018), and RoBERTaLarge (Liu et al., 2019) in the annotation loop and construct three datasets; D(BiDAF), D(BERT), and D(RoBERTa), each with 10,000 training examples, 1,000 validation, and 1,000 test examples.\n\nThe adversarial human annotation paradigm ensures that these datasets consist of questions that current state-of-the-art models (at least the ones used as adversaries in the annotation loop) find challenging. The three AdversarialQA round 1 datasets provide a training and evaluation resource for such methods.",
"passage: ### Supported Tasks and Leaderboards\n\n'extractive-qa': The dataset can be used to train a model for Extractive Question Answering, which consists in selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap F1 score. The RoBERTa-Large model trained on all the data combined with SQuAD currently achieves 64.35% F1. This task has an active leaderboard and is available as round 1 of the QA task on Dynabench and ranks models based on F1 score.### Languages\n\nThe text in the dataset is in English. The associated BCP-47 code is 'en'.## Dataset Structure### Data Instances\n\nData is provided in the same format as SQuAD 1.1. An example is shown below:### Data Fields\n\n- title: the title of the Wikipedia page from which the context is sourced\n- context: the context/passage\n- id: a string identifier for each question\n- answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an 'answer_start' field which is the character index of the start of the answer span, and a 'text' field which is the answer text.\n\nNote that no answers are provided in the test set. Indeed, this dataset is part of the DynaBench benchmark, for which you can submit your predictions on the website.### Data Splits\n\nThe dataset is composed of three different datasets constructed using different models in the loop: BiDAF, BERT-Large, and RoBERTa-Large. Each of these has 10,000 training examples, 1,000 validation examples, and 1,000 test examples for a total of 30,000/3,000/3,000 train/validation/test examples.## Dataset Creation### Curation Rationale\n\nThis dataset was collected to provide a more challenging and diverse Reading Comprehension dataset to state-of-the-art models.### Source Data#### Initial Data Collection and Normalization\n\nThe source passages are from Wikipedia and are the same as those used in SQuAD v1.1.#### Who are the source language producers?\n\nThe source language produces are Wikipedia editors for the passages, and human annotators on Mechanical Turk for the questions.### Annotations#### Annotation process\n\nThe dataset is collected through an adversarial human annotation process which pairs a human annotator and a reading comprehension model in an interactive setting. The human is presented with a passage for which they write a question and highlight the correct answer. The model then tries to answer the question, and, if it fails to answer correctly, the human wins. Otherwise, the human modifies or re-writes their question until the successfully fool the model."
] | [
-0.05630020797252655,
0.19685325026512146,
-0.006371346302330494,
0.03517504036426544,
0.04052634537220001,
0.009618441574275494,
0.06055770814418793,
0.08151401579380035,
0.03243812173604965,
0.11856482177972794,
-0.010841995477676392,
-0.008391423150897026,
0.10073594748973846,
0.10288513451814651,
0.012146544642746449,
-0.13071994483470917,
0.05495709180831909,
-0.05239538848400116,
0.05729576200246811,
0.09024684876203537,
0.10556032508611679,
-0.06113133952021599,
0.031154662370681763,
-0.029739225283265114,
0.01017989031970501,
0.007833891548216343,
-0.04838322103023529,
-0.03419020026922226,
0.08940714597702026,
0.03101332113146782,
0.09698519110679626,
0.013526794500648975,
0.044793400913476944,
-0.2527649700641632,
0.025098729878664017,
0.08201386034488678,
0.03762325644493103,
0.025311872363090515,
0.07041185349225998,
-0.004951608367264271,
0.03973281756043434,
-0.06448249518871307,
0.11558616161346436,
0.04863298684358597,
-0.10678134858608246,
-0.12485989928245544,
-0.10829806327819824,
0.024184666574001312,
0.07566436380147934,
0.061314791440963745,
-0.03922555223107338,
0.06204915791749954,
-0.01404210552573204,
0.0343388170003891,
0.06895513832569122,
-0.1280091255903244,
-0.041709162294864655,
0.009101364761590958,
0.009223145432770252,
0.0682130828499794,
-0.09196947515010834,
-0.02633179910480976,
-0.016794083639979362,
0.040621623396873474,
-0.011066056787967682,
-0.019034819677472115,
0.03447199612855911,
-0.015752656385302544,
-0.11691027879714966,
-0.04603184759616852,
0.09625406563282013,
-0.014685467816889286,
-0.08209167420864105,
-0.16225454211235046,
-0.014902852475643158,
0.09771380573511124,
-0.017380744218826294,
-0.030303336679935455,
-0.0003269402077421546,
0.0019529936835169792,
0.0708235502243042,
-0.04179200530052185,
-0.0847943127155304,
-0.02470506727695465,
-0.03309943526983261,
0.06106670945882797,
0.04124094173312187,
0.021323367953300476,
0.009880824014544487,
0.07296457886695862,
-0.011156082153320312,
-0.07767441868782043,
-0.054779183119535446,
-0.07214692234992981,
-0.16100141406059265,
-0.025007037445902824,
-0.015451568178832531,
-0.10911984741687775,
0.05056947469711304,
0.18009820580482483,
-0.07032918930053711,
0.0596432238817215,
-0.09124718606472015,
-0.037778355181217194,
0.06268148124217987,
0.14280486106872559,
-0.03153209015727043,
-0.08086174726486206,
-0.005004418082535267,
0.05387182533740997,
-0.02063647285103798,
-0.02928602695465088,
0.02024872973561287,
0.010007433593273163,
0.009043914265930653,
0.09289747476577759,
0.07433272153139114,
0.029147859662771225,
-0.0719340518116951,
-0.01715736649930477,
0.08158411830663681,
-0.14852049946784973,
0.005919860675930977,
0.04947039484977722,
0.006484569050371647,
-0.007588089909404516,
-0.010018255561590195,
-0.012306862510740757,
-0.08749919384717941,
0.038668133318424225,
-0.029462531208992004,
-0.03805217519402504,
-0.06210814416408539,
-0.07221762835979462,
0.044660963118076324,
-0.03779969736933708,
-0.07111681252717972,
-0.027875065803527832,
-0.08614977449178696,
-0.07369641959667206,
0.005800720304250717,
-0.06745138764381409,
-0.016155816614627838,
-0.00990595668554306,
0.029112838208675385,
-0.0026304186321794987,
0.03159467503428459,
0.0259011872112751,
-0.02973986603319645,
0.035857848823070526,
-0.015745587646961212,
0.024075379595160484,
0.06406230479478836,
0.012142245657742023,
-0.07839962840080261,
0.060820214450359344,
-0.09209923446178436,
0.12626639008522034,
-0.11759138107299805,
-0.056587472558021545,
-0.10385537147521973,
0.02758394181728363,
0.01285523921251297,
0.033324554562568665,
0.005409673787653446,
0.10272541642189026,
-0.19570443034172058,
0.002377021126449108,
0.12919697165489197,
-0.12360687553882599,
-0.03792646527290344,
0.08363865315914154,
-0.05242285877466202,
0.031126363202929497,
0.07033415138721466,
0.08500443398952484,
0.052185140550136566,
-0.08451133221387863,
-0.07067281752824783,
-0.048137493431568146,
0.04899445176124573,
0.1929689198732376,
0.06352084875106812,
-0.090395987033844,
0.0565166249871254,
0.02723981812596321,
-0.052242908626794815,
-0.057902611792087555,
0.019803892821073532,
-0.05299006402492523,
0.006325455382466316,
-0.02481134794652462,
-0.0034798160195350647,
-0.023002073168754578,
-0.06816522777080536,
-0.015184445306658745,
-0.08926654607057571,
-0.09701891988515854,
0.060262903571128845,
-0.02090960554778576,
0.011820380575954914,
-0.09763839840888977,
0.00924915075302124,
-0.04596654698252678,
0.014498457312583923,
-0.17826983332633972,
-0.12283960729837418,
0.06955070793628693,
-0.10585415363311768,
0.07647599279880524,
0.0010570921003818512,
0.0115815419703722,
-0.005800748243927956,
-0.04475443810224533,
0.015202576294541359,
-0.04655168950557709,
0.002096253214403987,
-0.027545535936951637,
-0.12772829830646515,
-0.037252720445394516,
-0.06975330412387848,
0.11121775954961777,
-0.10593993961811066,
-0.006563050672411919,
0.09202652424573898,
0.10879155993461609,
0.050919752568006516,
-0.07597723603248596,
0.042705848813056946,
0.03822264075279236,
0.011685055680572987,
-0.030211515724658966,
-0.017470134422183037,
-0.01640879362821579,
-0.03917799890041351,
0.06355465203523636,
-0.13418912887573242,
-0.13629108667373657,
0.01549940649420023,
0.0903318002820015,
-0.08331628143787384,
-0.0373866930603981,
-0.03066614456474781,
-0.03270837664604187,
-0.08680319786071777,
-0.06911585479974747,
0.14290529489517212,
0.08596502244472504,
0.028836550191044807,
-0.06247132644057274,
-0.050874218344688416,
-0.04632918909192085,
0.04404761642217636,
-0.04100014269351959,
0.07037272304296494,
0.04563775658607483,
-0.10379961133003235,
0.0603148452937603,
0.008287913165986538,
0.05311581492424011,
0.10714249312877655,
-0.05203152447938919,
-0.11670789122581482,
0.005415184423327446,
0.010161245241761208,
0.003923104144632816,
0.09304694086313248,
-0.022080257534980774,
0.04991522803902626,
0.05438176542520523,
0.011282335966825485,
0.01575220748782158,
-0.05314524471759796,
0.03774658590555191,
0.02009878307580948,
-0.02439570613205433,
-0.043757468461990356,
-0.026051538065075874,
0.06205432116985321,
0.09910523146390915,
0.01664130389690399,
0.08870731294155121,
-0.038998767733573914,
-0.07001923769712448,
-0.09405556321144104,
0.13093335926532745,
-0.10042832791805267,
-0.21788087487220764,
-0.11083984375,
-0.003794843563809991,
-0.027040692046284676,
-0.015951409935951233,
0.002419633325189352,
-0.04885299131274223,
-0.07424866408109665,
-0.08686384558677673,
0.06116492301225662,
0.048449043184518814,
-0.09783284366130829,
0.02685890905559063,
0.006076274439692497,
0.02957521378993988,
-0.14115765690803528,
0.028189871460199356,
0.03357011452317238,
-0.05764325335621834,
-0.008335755206644535,
0.023366693407297134,
0.09277981519699097,
0.08659697324037552,
0.07516267895698547,
-0.0444096177816391,
-0.027275560423731804,
0.2326614409685135,
-0.12813103199005127,
0.11370120942592621,
0.06356576085090637,
-0.07995054125785828,
0.05344093218445778,
0.1524389088153839,
0.010198562406003475,
-0.05513909459114075,
0.019864577800035477,
0.09240828454494476,
-0.03242434933781624,
-0.24242618680000305,
-0.04446631669998169,
-0.029420219361782074,
-0.04008609801530838,
0.05524319410324097,
0.0424877293407917,
-0.022136159241199493,
0.0058070882223546505,
-0.09586167335510254,
-0.01717354543507099,
0.08657480776309967,
0.04884304106235504,
0.09278972446918488,
0.000661670695990324,
0.05710071697831154,
-0.05845656245946884,
-0.0008410662412643433,
0.07014385610818863,
0.08075685799121857,
0.16909357905387878,
-0.03649165853857994,
0.16697221994400024,
0.03953004628419876,
0.028146516531705856,
-0.002016614191234112,
0.04548623785376549,
-0.027608554810285568,
0.06031251698732376,
-0.0194949172437191,
-0.060634396970272064,
-0.024302862584590912,
0.06711503863334656,
0.05490056052803993,
-0.0771060436964035,
0.044832345098257065,
-0.04243803769350052,
0.038501281291246414,
0.1845274567604065,
0.04303212836384773,
-0.03229004144668579,
-0.04363168403506279,
0.046201758086681366,
-0.08762766420841217,
-0.0963655412197113,
0.028541071340441704,
0.06760333478450775,
-0.12489484995603561,
0.06815474480390549,
-0.02227059006690979,
0.09436789155006409,
-0.09032014012336731,
-0.008363611064851284,
-0.007017097435891628,
0.03929779306054115,
-0.022457802668213844,
0.06146728992462158,
-0.22145912051200867,
0.09057318419218063,
0.012312588281929493,
0.05598381906747818,
-0.06602343916893005,
0.05346385017037392,
-0.009394848719239235,
-0.03271843492984772,
0.10299955308437347,
-0.00005161017179489136,
-0.09734028577804565,
-0.04932545870542526,
-0.07991072535514832,
0.04968897998332977,
0.06028946116566658,
-0.09053090214729309,
0.11423145234584808,
-0.009684297256171703,
0.00899617187678814,
-0.041798703372478485,
0.008268583565950394,
-0.08645389974117279,
-0.2449761927127838,
0.0696425661444664,
-0.0633229911327362,
0.03755888715386391,
-0.07795760035514832,
-0.012439844198524952,
0.014244371093809605,
0.04313424229621887,
-0.1335216760635376,
-0.09124957025051117,
-0.118064284324646,
-0.03161782771348953,
0.16071650385856628,
-0.039420053362846375,
0.013099906966090202,
-0.024264326319098473,
0.1521032750606537,
-0.04359599947929382,
-0.030483342707157135,
0.015879284590482712,
-0.02757001295685768,
-0.20781266689300537,
-0.04798087850213051,
0.1326826810836792,
0.0654209777712822,
0.04904544726014137,
0.018323780968785286,
0.0556320995092392,
0.04220520704984665,
-0.08223775029182434,
0.05126242712140083,
0.05624697357416153,
-0.00025150831788778305,
0.08692856878042221,
-0.0329098254442215,
-0.05132869631052017,
-0.12973113358020782,
-0.05422905087471008,
0.0737295001745224,
0.17066311836242676,
-0.031754519790410995,
0.14320993423461914,
0.13495413959026337,
-0.12374242395162582,
-0.1769692599773407,
-0.03907949477434158,
0.04347023367881775,
-0.019051942974328995,
0.05612529441714287,
-0.2150755524635315,
0.013123879209160805,
0.053797751665115356,
-0.01672879420220852,
0.013140944764018059,
-0.17272904515266418,
-0.1053672507405281,
0.03731418773531914,
0.017663152888417244,
-0.09449608623981476,
-0.1301312893629074,
-0.0432206466794014,
-0.00892582070082426,
-0.13502880930900574,
0.08244753628969193,
0.048125624656677246,
0.027471931651234627,
-0.011550835333764553,
0.026457207277417183,
0.048641033470630646,
-0.03554829955101013,
0.12357790768146515,
-0.015276683494448662,
0.04516556113958359,
-0.0510777086019516,
0.033755384385585785,
0.028521660715341568,
-0.012044758535921574,
0.08143533766269684,
0.016550203785300255,
0.028041290119290352,
-0.10929141938686371,
-0.05454946681857109,
-0.0657045766711235,
-0.006824160926043987,
-0.09361971914768219,
-0.04220673441886902,
-0.03174437955021858,
0.07242828607559204,
0.06863560527563095,
-0.035056039690971375,
0.02639618329703808,
-0.06999185681343079,
0.09767342358827591,
0.11010438203811646,
0.1297532021999359,
0.08167551457881927,
-0.07600921392440796,
-0.0071159545332193375,
-0.0033506620675325394,
0.021482329815626144,
-0.041006796061992645,
0.0691433995962143,
0.08985293656587601,
0.013404211960732937,
0.12278307974338531,
-0.030487176030874252,
-0.17662101984024048,
0.013430563732981682,
0.025796040892601013,
-0.06558602303266525,
-0.21803376078605652,
0.026521094143390656,
0.04518583416938782,
-0.13527879118919373,
-0.07071612775325775,
0.11375391483306885,
0.029029829427599907,
-0.060530103743076324,
0.00889557134360075,
0.09714823961257935,
0.05807909369468689,
0.09880338609218597,
0.028654996305704117,
0.05380606651306152,
-0.06252633035182953,
0.07936925441026688,
0.12835995852947235,
-0.07429273426532745,
0.041176117956638336,
0.09158669412136078,
-0.04074745625257492,
-0.036265067756175995,
-0.0014476999640464783,
0.05770839750766754,
0.045174889266490936,
-0.02974838577210903,
0.0069586290046572685,
-0.11556551605463028,
0.04489590600132942,
0.12878234684467316,
-0.024897372350096703,
0.06610876321792603,
0.019143734127283096,
-0.027876339852809906,
-0.04830353707075119,
0.11691132187843323,
-0.0077471137046813965,
0.040880344808101654,
-0.017728500068187714,
-0.0125966165214777,
-0.028024040162563324,
-0.03915224224328995,
0.010682620108127594,
-0.048262350261211395,
-0.09030155837535858,
-0.0001516244374215603,
-0.23525647819042206,
0.024472495540976524,
-0.057645536959171295,
0.013881171122193336,
0.005355393514037132,
-0.033422693610191345,
0.03255170211195946,
0.011238732375204563,
-0.04581816494464874,
-0.03349154070019722,
-0.024617981165647507,
0.07891623675823212,
-0.15437491238117218,
0.0217241570353508,
0.08802396059036255,
-0.08081571012735367,
0.10297094285488129,
-0.02290232479572296,
-0.04759424552321434,
0.04576066881418228,
-0.09985228627920151,
0.03399015963077545,
-0.05938354879617691,
0.05648941174149513,
-0.015483890660107136,
-0.10984092950820923,
0.0028455452993512154,
-0.007082778960466385,
-0.07152602821588516,
0.02502841129899025,
0.06718362867832184,
-0.07792527973651886,
0.04997731000185013,
0.04969733953475952,
-0.055777885019779205,
-0.02544369362294674,
0.03477964177727699,
0.0852997750043869,
0.01638062298297882,
0.1120612621307373,
-0.04662969335913658,
0.07759237289428711,
-0.1218046247959137,
-0.032083600759506226,
0.023051118478178978,
0.05250464379787445,
0.05068044364452362,
-0.03621843084692955,
0.05456202104687691,
-0.009860378690063953,
0.17780224978923798,
-0.07792719453573227,
0.04328259453177452,
0.056608133018016815,
-0.02774157002568245,
-0.03796249255537987,
0.04786459729075432,
-0.02709474042057991,
-0.005109492689371109,
-0.02465010993182659,
0.009199392050504684,
-0.08080939948558807,
-0.07255696505308151,
0.017491953447461128,
0.0802692323923111,
0.12223903089761734,
0.18474027514457703,
-0.02510889247059822,
0.06617110222578049,
-0.04409034550189972,
0.0019537881016731262,
0.052441973239183426,
-0.04405009746551514,
-0.0020000720396637917,
-0.0685688704252243,
0.0936841294169426,
0.06113455817103386,
-0.12863636016845703,
0.12906670570373535,
-0.0818333774805069,
-0.05268484354019165,
-0.051812268793582916,
-0.16173213720321655,
-0.045652229338884354,
0.022218206897377968,
0.007310240529477596,
-0.10288297384977341,
0.060354895889759064,
0.08150321245193481,
0.0056393807753920555,
-0.02249211259186268,
0.0856008231639862,
-0.028348010033369064,
-0.06324966251850128,
0.06046571582555771,
0.035308077931404114,
0.0473841056227684,
0.09828720986843109,
0.07591463625431061,
0.03247207775712013,
0.0802667886018753,
0.07781478017568588,
0.07899930328130722,
-0.01733222045004368,
-0.01779281161725521,
-0.051886171102523804,
-0.05548819154500961,
0.03536612540483475,
-0.040420714765787125,
-0.020607516169548035,
0.20304343104362488,
0.03409029543399811,
0.015467389486730099,
0.0141737200319767,
0.2253297120332718,
-0.011815967038273811,
-0.10721203684806824,
-0.19149306416511536,
0.04929113760590553,
-0.021432748064398766,
0.003215672681108117,
0.04615796357393265,
-0.11820539087057114,
0.024002274498343468,
0.10126300901174545,
0.09898176789283752,
-0.024135032668709755,
0.024648455902934074,
0.00780070386826992,
0.01806793361902237,
-0.0013362984172999859,
0.05768275633454323,
0.013947556726634502,
0.21204854547977448,
-0.04646173119544983,
0.06190953776240349,
0.0021341638639569283,
-0.01927756890654564,
-0.04270072281360626,
0.1541319042444229,
-0.05004952847957611,
-0.0028293102513998747,
-0.10351136326789856,
0.07950888574123383,
-0.039773255586624146,
-0.314049631357193,
-0.028757810592651367,
-0.07135692238807678,
-0.13533209264278412,
-0.02367308735847473,
0.006357062608003616,
-0.015310863964259624,
-0.005517620127648115,
0.0694778636097908,
-0.015218116343021393,
0.18054376542568207,
0.01658066362142563,
0.03377702087163925,
-0.04328435659408569,
0.12991958856582642,
-0.044530753046274185,
0.16298481822013855,
0.04534912109375,
0.06893505156040192,
0.07679732143878937,
0.00011120922863483429,
-0.0908636599779129,
0.06953872740268707,
0.06773924827575684,
-0.009418150410056114,
-0.00961154606193304,
0.17728328704833984,
0.023619281128048897,
0.14185935258865356,
0.09980478137731552,
-0.00005793944001197815,
0.06875480711460114,
-0.008276363834738731,
-0.04782301187515259,
-0.07974044978618622,
0.0864354819059372,
-0.100504070520401,
0.10974143445491791,
0.14188435673713684,
-0.02237813174724579,
0.028344810009002686,
-0.03021911159157753,
0.011585268191993237,
-0.05039678514003754,
0.1300922930240631,
-0.03524581715464592,
-0.12465601414442062,
0.021661074832081795,
-0.022407665848731995,
0.07601197808980942,
-0.16845378279685974,
-0.04715587943792343,
0.05155903473496437,
-0.01860266551375389,
-0.00020203180611133575,
0.11017182469367981,
0.0793614387512207,
0.020417962223291397,
-0.04553968459367752,
-0.10043749213218689,
0.038323819637298584,
0.09116019308567047,
-0.0741925835609436,
0.0009633339941501617
] |
2305f2e63b68056f9b9037a3805c8c196e0d5581 |
# Dataset Card for "aeslc"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/ryanzhumich/AESLC
- **Paper:** [This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation](https://arxiv.org/abs/1906.03497)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 11.64 MB
- **Size of the generated dataset:** 14.95 MB
- **Total amount of disk used:** 26.59 MB
### Dataset Summary
A collection of email messages of employees in the Enron Corporation.
There are two features:
- email_body: email body text.
- subject_line: email subject text.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
Monolingual English (mainly en-US) with some exceptions.
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 11.64 MB
- **Size of the generated dataset:** 14.95 MB
- **Total amount of disk used:** 26.59 MB
An example of 'train' looks as follows.
```
{
"email_body": "B/C\n<<some doc>>\n",
"subject_line": "Service Agreement"
}
```
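To reproduce an instance like the one above, a minimal sketch using the Hugging Face `datasets` library (assuming it is installed and that the Hub identifier `aeslc` resolves to this dataset):

```python
from datasets import load_dataset

# Download and cache all three AESLC splits from the Hub.
dataset = load_dataset("aeslc")

# Each record pairs a raw email body with its annotated subject line.
example = dataset["train"][0]
print(example["email_body"])
print(example["subject_line"])
```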
### Data Fields
The data fields are the same among all splits.
#### default
- `email_body`: a `string` feature.
- `subject_line`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|14436| 1960|1906|
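The counts in the table can be verified with a short snippet (again a sketch, under the same `datasets` assumption as above):

```python
from datasets import load_dataset

dataset = load_dataset("aeslc")
# The printed sizes should match the table: 14436 / 1960 / 1906.
for split_name, split in dataset.items():
    print(split_name, len(split))
```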
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{zhang-tetreault-2019-email,
title = "This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation",
author = "Zhang, Rui and
Tetreault, Joel",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1043",
doi = "10.18653/v1/P19-1043",
pages = "446--456",
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset. | aeslc | [
"task_categories:summarization",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"aspect-based-summarization",
"conversations-summarization",
"multi-document-summarization",
"email-headline-generation",
"arxiv:1906.03497",
"region:us"
] | 2022-03-02T23:29:22+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "paperswithcode_id": "aeslc", "pretty_name": "AESLC: Annotated Enron Subject Line Corpus", "tags": ["aspect-based-summarization", "conversations-summarization", "multi-document-summarization", "email-headline-generation"], "dataset_info": {"features": [{"name": "email_body", "dtype": "string"}, {"name": "subject_line", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11897245, "num_examples": 14436}, {"name": "validation", "num_bytes": 1659987, "num_examples": 1960}, {"name": "test", "num_bytes": 1383452, "num_examples": 1906}], "download_size": 7948020, "dataset_size": 14940684}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2024-01-09T11:49:13+00:00 | [
"1906.03497"
] | [
"en"
] | TAGS
#task_categories-summarization #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #aspect-based-summarization #conversations-summarization #multi-document-summarization #email-headline-generation #arxiv-1906.03497 #region-us
| Dataset Card for "aeslc"
========================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage:
* Repository: URL
* Paper: This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation
* Point of Contact:
* Size of downloaded dataset files: 11.64 MB
* Size of the generated dataset: 14.95 MB
* Total amount of disk used: 26.59 MB
### Dataset Summary
A collection of email messages of employees in the Enron Corporation.
There are two features:
* email\_body: email body text.
* subject\_line: email subject text.
### Supported Tasks and Leaderboards
### Languages
Monolingual English (mainly en-US) with some exceptions.
Dataset Structure
-----------------
### Data Instances
#### default
* Size of downloaded dataset files: 11.64 MB
* Size of the generated dataset: 14.95 MB
* Total amount of disk used: 26.59 MB
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
#### default
* 'email\_body': a 'string' feature.
* 'subject\_line': a 'string' feature.
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @patrickvonplaten, @thomwolf, @lewtun for adding this dataset.
| [
"### Dataset Summary\n\n\nA collection of email messages of employees in the Enron Corporation.\n\n\nThere are two features:\n\n\n* email\\_body: email body text.\n* subject\\_line: email subject text.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nMonolingual English (mainly en-US) with some exceptions.\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### default\n\n\n* Size of downloaded dataset files: 11.64 MB\n* Size of the generated dataset: 14.95 MB\n* Total amount of disk used: 26.59 MB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### default\n\n\n* 'email\\_body': a 'string' feature.\n* 'subject\\_line': a 'string' feature.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @patrickvonplaten, @thomwolf, @lewtun for adding this dataset."
] | [
"TAGS\n#task_categories-summarization #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #aspect-based-summarization #conversations-summarization #multi-document-summarization #email-headline-generation #arxiv-1906.03497 #region-us \n",
"### Dataset Summary\n\n\nA collection of email messages of employees in the Enron Corporation.\n\n\nThere are two features:\n\n\n* email\\_body: email body text.\n* subject\\_line: email subject text.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nMonolingual English (mainly en-US) with some exceptions.\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"#### default\n\n\n* Size of downloaded dataset files: 11.64 MB\n* Size of the generated dataset: 14.95 MB\n* Total amount of disk used: 26.59 MB\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### default\n\n\n* 'email\\_body': a 'string' feature.\n* 'subject\\_line': a 'string' feature.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @patrickvonplaten, @thomwolf, @lewtun for adding this dataset."
] | [
116,
44,
10,
27,
6,
49,
17,
32,
11,
7,
4,
10,
10,
5,
5,
9,
18,
7,
8,
14,
6,
6,
28
] | [
"passage: TAGS\n#task_categories-summarization #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #aspect-based-summarization #conversations-summarization #multi-document-summarization #email-headline-generation #arxiv-1906.03497 #region-us \n### Dataset Summary\n\n\nA collection of email messages of employees in the Enron Corporation.\n\n\nThere are two features:\n\n\n* email\\_body: email body text.\n* subject\\_line: email subject text.### Supported Tasks and Leaderboards### Languages\n\n\nMonolingual English (mainly en-US) with some exceptions.\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 11.64 MB\n* Size of the generated dataset: 14.95 MB\n* Total amount of disk used: 26.59 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'email\\_body': a 'string' feature.\n* 'subject\\_line': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information### Contributions\n\n\nThanks to @patrickvonplaten, @thomwolf, @lewtun for adding this dataset."
] | [
-0.029901545494794846,
0.23830586671829224,
-0.005871639586985111,
0.04570554569363594,
0.07980197668075562,
0.02176634408533573,
0.08672069013118744,
0.1049494668841362,
-0.04988327994942665,
0.17163383960723877,
0.04765167459845543,
0.01201436948031187,
0.10775231570005417,
0.17226573824882507,
-0.010894332081079483,
-0.2192162126302719,
0.011028758250176907,
-0.09845507144927979,
-0.061503127217292786,
0.09270159155130386,
0.1243402287364006,
-0.08119351416826248,
0.08188404142856598,
-0.10885454714298248,
-0.06258969753980637,
0.020824454724788666,
-0.07319152355194092,
0.004844771698117256,
0.012159843929111958,
0.07526711374521255,
0.00848871935158968,
0.003957777284085751,
0.06220797821879387,
-0.277404248714447,
0.009016991592943668,
0.04473443329334259,
-0.0003225351683795452,
0.0488419309258461,
0.1101832166314125,
-0.055156160145998,
0.14652538299560547,
-0.18163731694221497,
0.045326828956604004,
0.03646962344646454,
-0.09441832453012466,
-0.08715079724788666,
-0.1496925950050354,
0.06563319265842438,
0.12370152026414871,
0.06297529488801956,
-0.051127105951309204,
0.0076773641631007195,
-0.04800364375114441,
0.08633895963430405,
0.12271133810281754,
-0.072655588388443,
-0.06123824045062065,
0.015407734550535679,
0.02933683432638645,
0.08478540182113647,
-0.08862565457820892,
-0.021651456132531166,
0.011442623101174831,
0.034340839833021164,
0.032412897795438766,
-0.01124994084239006,
-0.05807057023048401,
0.05336325243115425,
-0.13240385055541992,
-0.08462732285261154,
0.19254477322101593,
0.07404807955026627,
-0.00013705693709198385,
-0.1542636752128601,
0.0005735006416216493,
-0.050807856023311615,
-0.02114250883460045,
0.015670735388994217,
0.010753065347671509,
-0.03712375462055206,
-0.020816277712583542,
-0.03021429479122162,
-0.06630706787109375,
-0.019621845334768295,
-0.010217770002782345,
-0.026083074510097504,
0.006781081669032574,
-0.0005973779479973018,
0.027766868472099304,
0.03379245102405548,
-0.010433708317577839,
-0.14582179486751556,
0.0011785079259425402,
-0.006558584980666637,
-0.08099577575922012,
-0.013501920737326145,
0.03209371119737625,
-0.05891894921660423,
0.112242691218853,
0.0843452736735344,
-0.06871037930250168,
0.03045756369829178,
-0.04832380264997482,
-0.01258621085435152,
0.09917806833982468,
0.09897986799478531,
-0.09374205023050308,
-0.14263547956943512,
-0.034680645912885666,
0.025137046352028847,
0.012911451049149036,
-0.013828263618052006,
-0.04623636603355408,
0.08807829767465591,
0.08842556178569794,
0.1065547913312912,
0.13689865171909332,
0.014051010832190514,
-0.051146250218153,
-0.03155398368835449,
0.1079150140285492,
-0.16086901724338531,
0.06277625262737274,
0.039932094514369965,
-0.037062615156173706,
-0.03658299893140793,
-0.015787795186042786,
0.007561539299786091,
-0.0880662202835083,
0.08541880548000336,
-0.061321672052145004,
-0.021024402230978012,
-0.0705711618065834,
-0.09710415452718735,
0.08998700976371765,
-0.04085904732346535,
-0.04886463284492493,
-0.0016756055410951376,
-0.134665846824646,
-0.07686378061771393,
0.025097690522670746,
-0.11762279272079468,
-0.07067631930112839,
-0.03911670669913292,
-0.05810003727674484,
0.02623685821890831,
-0.00951800961047411,
0.11995924264192581,
-0.054334674030542374,
0.05377628654241562,
-0.043008167296648026,
0.018190952017903328,
0.05947502329945564,
0.04351935535669327,
-0.07023756951093674,
0.07888016849756241,
-0.06922024488449097,
0.1201324611902237,
-0.11214642226696014,
0.046367716044187546,
-0.1452561616897583,
-0.05181170254945755,
0.013790562748908997,
-0.018826991319656372,
0.01012290921062231,
0.1673673838376999,
-0.21056105196475983,
-0.041943930089473724,
0.08196841180324554,
-0.0522836409509182,
-0.0912790596485138,
0.06573472917079926,
-0.026864653453230858,
-0.008123542182147503,
0.03221646696329117,
0.1380545198917389,
-0.004432046785950661,
-0.03094497136771679,
-0.08241616189479828,
-0.05790496990084648,
0.056649062782526016,
0.17503446340560913,
0.11373858153820038,
-0.08422330766916275,
0.16295011341571808,
0.00982207152992487,
0.030134005472064018,
-0.038928426802158356,
-0.04936718940734863,
-0.07389219850301743,
0.0076535423286259174,
-0.04865849390625954,
-0.04250798746943474,
0.00841969158500433,
-0.054510656744241714,
-0.032567717134952545,
-0.0729907751083374,
0.011160604655742645,
0.017229489982128143,
-0.02005685865879059,
0.017138637602329254,
-0.020695721730589867,
0.00013629731256514788,
-0.014021324925124645,
0.015198113396763802,
-0.1629868894815445,
-0.13275682926177979,
0.03039724752306938,
0.020090721547603607,
0.10842778533697128,
-0.07042811810970306,
0.021218908950686455,
0.008223125711083412,
-0.04782501608133316,
0.002976622199639678,
0.05402165278792381,
0.022390125319361687,
-0.03127489984035492,
-0.22248725593090057,
-0.04160544276237488,
-0.031489189714193344,
0.12297380715608597,
-0.10094819217920303,
0.009706730023026466,
0.1410250961780548,
0.17010930180549622,
0.033088319003582,
-0.0788022056221962,
0.05930805578827858,
0.008182736113667488,
-0.03311258554458618,
-0.06881603598594666,
-0.03369687497615814,
-0.052918534725904465,
-0.02590753324329853,
0.004333179444074631,
-0.12891091406345367,
-0.019473928958177567,
0.06533408164978027,
0.09950879961252213,
-0.06474114209413528,
-0.04961026459932327,
-0.056276965886354446,
-0.04585441201925278,
-0.07014892250299454,
-0.07974881678819656,
0.011962815187871456,
0.04039795696735382,
0.016917528584599495,
-0.062033362686634064,
-0.07253342121839523,
-0.020677722990512848,
-0.012470051646232605,
-0.09702250361442566,
0.14292921125888824,
0.04200879484415054,
-0.1603308916091919,
0.16287125647068024,
-0.011899085715413094,
0.07005133479833603,
0.10887379199266434,
-0.046617742627859116,
-0.08134137839078903,
-0.05489986389875412,
0.021573346108198166,
0.019654754549264908,
0.07500237971544266,
-0.05353333055973053,
0.03890962898731232,
0.05586785450577736,
0.029819374904036522,
0.03949795290827751,
-0.04449218511581421,
0.030450554564595222,
0.023390091955661774,
-0.06299782544374466,
0.013047052547335625,
-0.027316877618432045,
0.024722300469875336,
0.1021619662642479,
0.035134270787239075,
0.10540721565485,
-0.02704915590584278,
-0.0749085545539856,
-0.1363748162984848,
0.1394781917333603,
-0.10940037667751312,
-0.1822052001953125,
-0.11344470828771591,
-0.12120885401964188,
-0.03742225095629692,
-0.028028786182403564,
0.029657889157533646,
-0.04517245292663574,
-0.07320784032344818,
-0.10851918160915375,
0.09353271871805191,
0.030201254412531853,
-0.06455054879188538,
-0.016356421634554863,
0.005611345637589693,
0.02114669606089592,
-0.08613500744104385,
0.02625907026231289,
0.06362876296043396,
-0.0007083356031216681,
-0.013806013390421867,
0.0362878292798996,
0.1616068035364151,
0.11527622491121292,
0.06414403766393661,
0.003927688580006361,
-0.011007174849510193,
0.2758256196975708,
-0.12076498568058014,
0.09395632147789001,
0.0700083076953888,
-0.029575778171420097,
0.06050005182623863,
0.23047411441802979,
0.025762902572751045,
-0.051966071128845215,
0.032213591039180756,
0.0741710290312767,
-0.03815887123346329,
-0.2460075318813324,
-0.10059724748134613,
-0.0719791129231453,
0.03067847713828087,
0.07700168341398239,
0.05485369265079498,
-0.01539981085807085,
-0.0000775842199800536,
-0.07102396339178085,
-0.006511644460260868,
0.06446478515863419,
0.08672651648521423,
0.021806804463267326,
0.02210027165710926,
0.05946420505642891,
-0.042006202042102814,
0.017779119312763214,
0.11607479304075241,
0.0010984312975779176,
0.18062527477741241,
-0.03336617350578308,
0.19248870015144348,
0.05577770248055458,
0.06961623579263687,
-0.05746712535619736,
0.027204502373933792,
-0.017086142674088478,
0.0735662505030632,
0.0011897919466719031,
-0.07963389903306961,
0.018077874556183815,
0.07122717797756195,
0.09264429658651352,
-0.04458414018154144,
0.023694051429629326,
-0.061467863619327545,
0.0661209374666214,
0.18085183203220367,
0.058963026851415634,
-0.11534011363983154,
-0.018483776599168777,
0.12685582041740417,
-0.06341318786144257,
-0.06964361667633057,
-0.04320235177874565,
0.10884682834148407,
-0.1349038928747177,
0.1074041947722435,
0.001898529240861535,
0.13301020860671997,
-0.043489664793014526,
-0.03268599882721901,
-0.004126488231122494,
0.013524891808629036,
-0.027529917657375336,
0.11062046885490417,
-0.1745861917734146,
0.15607629716396332,
0.016808807849884033,
-0.008490891195833683,
-0.0879804790019989,
0.05710815265774727,
-0.030134720727801323,
-0.006405575666576624,
0.1286197304725647,
0.008624294772744179,
-0.1283016949892044,
-0.008770433254539967,
-0.09286215156316757,
-0.010814634151756763,
0.09648094326257706,
-0.10984213650226593,
0.10629381239414215,
0.016497887670993805,
-0.03295494616031647,
-0.024418482556939125,
0.019395701587200165,
-0.06308719515800476,
-0.2056882381439209,
0.039474621415138245,
-0.09031902253627777,
0.05520426481962204,
-0.04648610204458237,
-0.01902666874229908,
-0.03595234453678131,
0.19865335524082184,
-0.14564087986946106,
-0.07886969298124313,
-0.14358195662498474,
0.09468035399913788,
0.18938075006008148,
-0.06324689835309982,
0.030434172600507736,
-0.02031121216714382,
0.1405005306005478,
-0.003510315204039216,
-0.046491239219903946,
0.05565250664949417,
-0.03550763800740242,
-0.18890729546546936,
-0.047582242637872696,
0.16592156887054443,
0.02650119550526142,
0.0688936784863472,
-0.05854472517967224,
0.07595687359571457,
-0.00700303865596652,
-0.09553074091672897,
0.04020557180047035,
0.09751392155885696,
0.08844642341136932,
0.1267736703157425,
-0.016389694064855576,
-0.1823318898677826,
-0.09544476866722107,
-0.08891873806715012,
0.11751024425029755,
0.2140968292951584,
-0.07176808267831802,
0.10915610194206238,
0.046324167400598526,
-0.0787886530160904,
-0.19434799253940582,
-0.027043037116527557,
0.03444550558924675,
0.011719472706317902,
0.04864020273089409,
-0.14886760711669922,
0.01852719485759735,
0.04169566556811333,
-0.007783521898090839,
0.03239459544420242,
-0.2351556420326233,
-0.1361638903617859,
-0.004512350540608168,
0.010801632888615131,
-0.2523628771305084,
-0.19044505059719086,
-0.12808607518672943,
-0.06832308322191238,
-0.18793123960494995,
0.11027365177869797,
0.004345409106463194,
0.017628230154514313,
0.01761660911142826,
0.05754699185490608,
0.018821444362401962,
-0.04389354959130287,
0.16466769576072693,
0.013594536110758781,
0.012401782907545567,
-0.09795064479112625,
-0.07371249049901962,
0.016648104414343834,
-0.0378996878862381,
0.12443184852600098,
-0.10046790540218353,
0.027732472866773605,
-0.08080070465803146,
-0.01787032000720501,
-0.08078011870384216,
0.011181707493960857,
-0.08717644214630127,
-0.04146867245435715,
-0.10621394217014313,
0.05821933597326279,
0.1013990268111229,
-0.027633462101221085,
0.07165190577507019,
-0.061657778918743134,
0.07698605954647064,
0.20911867916584015,
0.11292507499456406,
0.07851899415254593,
-0.04788682982325554,
-0.034609101712703705,
-0.030744412913918495,
-0.04167526215314865,
-0.17632612586021423,
0.02794385515153408,
0.11065322160720825,
0.03556861728429794,
0.1803102195262909,
-0.032560084015131,
-0.13687121868133545,
-0.03795513138175011,
0.07604970782995224,
-0.08223322033882141,
-0.19482895731925964,
0.01067061722278595,
0.07713834941387177,
-0.22306953370571136,
-0.08227255940437317,
0.053108736872673035,
0.028432393446564674,
-0.025345778092741966,
0.00014226049825083464,
0.14704866707324982,
-0.00297032343223691,
0.13015393912792206,
0.066397525370121,
0.06281585246324539,
-0.09338859468698502,
0.08273256570100784,
0.14804448187351227,
-0.08129497617483139,
0.017818545922636986,
0.21162252128124237,
-0.034030575305223465,
-0.023491989821195602,
0.10317441076040268,
0.09726778417825699,
0.03541800007224083,
-0.011924345046281815,
0.0070388359017670155,
-0.13946974277496338,
0.07326507568359375,
0.12407616525888443,
-0.013623801991343498,
0.05511696636676788,
0.035661302506923676,
-0.027596602216362953,
-0.04773206263780594,
0.16499608755111694,
0.07638055086135864,
0.03366894647479057,
-0.04371269419789314,
0.02955286018550396,
-0.030708128586411476,
0.0071173012256622314,
0.00481666624546051,
0.013636716641485691,
-0.08829499781131744,
-0.03260741010308266,
-0.06510049849748611,
0.01691998541355133,
-0.09901800006628036,
-0.011549923568964005,
-0.01982615701854229,
-0.08046523481607437,
-0.028798537328839302,
-0.0044311219826340675,
-0.04890774190425873,
-0.07011380791664124,
-0.08797148615121841,
0.08311174064874649,
-0.17955411970615387,
-0.027751466259360313,
0.08572916686534882,
-0.05961160361766815,
0.10103017836809158,
-0.011971594765782356,
-0.01647474430501461,
0.020546255633234978,
-0.09155407547950745,
0.005147886462509632,
-0.011527197435498238,
0.022367803379893303,
0.04639171063899994,
-0.1307791769504547,
-0.04484873265028,
0.00006926862261025235,
-0.04426777735352516,
0.04498763009905815,
-0.030411826446652412,
-0.10441857576370239,
0.054408200085163116,
-0.062502421438694,
-0.07123308628797531,
-0.03797653317451477,
0.09204598516225815,
0.06795154511928558,
-0.0178509633988142,
0.11342941969633102,
-0.045139580965042114,
0.09878012537956238,
-0.1777966469526291,
-0.006857767701148987,
0.003656788030639291,
-0.02677104063332081,
-0.012610707432031631,
0.015340019017457962,
0.08115114271640778,
-0.015121763572096825,
0.1783958226442337,
-0.0025429504457861185,
-0.05742614343762398,
0.04992423206567764,
0.07177534699440002,
0.034863319247961044,
0.06536933779716492,
0.060424234718084335,
-0.044167663902044296,
-0.032652970403432846,
-0.004673664923757315,
-0.032299548387527466,
-0.05512499809265137,
0.024519581347703934,
0.17218783497810364,
0.1415511667728424,
0.1518326997756958,
-0.011574048548936844,
0.07379946112632751,
-0.12939387559890747,
0.005038965959101915,
0.08979087322950363,
-0.02405056543648243,
0.05452807620167732,
-0.075784832239151,
0.08835457265377045,
0.05364594981074333,
-0.2363080233335495,
0.0969071090221405,
-0.05638688802719116,
-0.07974252849817276,
-0.02303408272564411,
-0.11352507770061493,
-0.07513098418712616,
-0.0328870490193367,
-0.013633555732667446,
-0.13083001971244812,
0.1119566559791565,
0.05948835611343384,
-0.006085081957280636,
-0.0073396507650613785,
0.09428179264068604,
-0.052410028874874115,
-0.04602472856640816,
0.004211803898215294,
0.0491366982460022,
-0.003856135532259941,
0.027295665815472603,
0.08150393515825272,
0.002101101214066148,
0.06335899233818054,
0.08273543417453766,
0.06690289080142975,
-0.004294689279049635,
0.035488951951265335,
-0.053607210516929626,
-0.08399730175733566,
0.012302429415285587,
-0.016770239919424057,
-0.019125154241919518,
0.2077571153640747,
0.021668165922164917,
0.031215710565447807,
-0.005787935107946396,
0.19404278695583344,
-0.07020170986652374,
-0.12857116758823395,
-0.13774630427360535,
0.045079879462718964,
-0.046996816992759705,
-0.005704965442419052,
0.008301212452352047,
-0.15761807560920715,
0.012453354895114899,
0.10141613334417343,
0.191083624958992,
-0.03149595111608505,
0.007060672622174025,
-0.003929677419364452,
0.02125883288681507,
0.00611687358468771,
-0.015055308118462563,
0.05035502836108208,
0.1983916014432907,
-0.0556805394589901,
0.06270049512386322,
-0.03303507715463638,
-0.042977768927812576,
-0.05981365218758583,
0.11170218884944916,
0.01880750246345997,
-0.015719249844551086,
-0.037398628890514374,
0.1470203548669815,
-0.08578838407993317,
-0.191451296210289,
0.002870278898626566,
-0.17016464471817017,
-0.14088140428066254,
-0.006481228396296501,
0.06886587291955948,
0.03578566014766693,
0.014320571906864643,
0.0527673065662384,
0.010310584679245949,
0.0959646925330162,
0.04592208191752434,
-0.08566118776798248,
-0.03344076871871948,
0.09273485839366913,
-0.0919976755976677,
0.16010628640651703,
0.0017423068638890982,
0.05873226746916771,
0.10726145654916763,
-0.052562836557626724,
-0.10399549454450607,
0.04160865396261215,
0.119832843542099,
0.006819544360041618,
0.04231540486216545,
0.1675034612417221,
-0.0174292903393507,
0.13168464601039886,
0.08625520020723343,
-0.028494982048869133,
0.006570117082446814,
0.03321252390742302,
-0.015131752006709576,
-0.09896867722272873,
0.05278149992227554,
-0.09332449734210968,
0.11690961569547653,
0.15547220408916473,
-0.06817080825567245,
0.0106606874614954,
-0.016270622611045837,
0.05440550670027733,
-0.03246529400348663,
0.19932852685451508,
0.009539954364299774,
-0.19429844617843628,
0.012258607894182205,
-0.05770306661725044,
0.10083598643541336,
-0.1935608685016632,
-0.05424055457115173,
0.0026845643296837807,
-0.0010408994276076555,
-0.07566146552562714,
0.15327897667884827,
0.09467852115631104,
-0.028688007965683937,
-0.05381196737289429,
-0.10681509971618652,
0.025342348963022232,
0.12414172291755676,
-0.07249604910612106,
0.013837505131959915
] |
445834a997dce8b40e1d108638064381de80c497 | "\n# Dataset Card for Afrikaans Ner Corpus\n\n## Table of Contents\n- [Dataset Description](#dataset(...TRUNCATED) | afrikaans_ner_corpus | ["task_categories:token-classification","task_ids:named-entity-recognition","annotations_creators:ex(...TRUNCATED) | 2022-03-02T23:29:22+00:00 | "{\"annotations_creators\": [\"expert-generated\"], \"language_creators\": [\"expert-generated\"], \(...TRUNCATED) | 2024-01-09T11:51:47+00:00 | [] | [
"af"
] | "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creator(...TRUNCATED) | "\n# Dataset Card for Afrikaans Ner Corpus\n\n## Table of Contents\n- Dataset Description\n - Datas(...TRUNCATED) | ["# Dataset Card for Afrikaans Ner Corpus","## Table of Contents\n- Dataset Description\n - Dataset(...TRUNCATED) | ["TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creato(...TRUNCATED) | [
95, 8, 120, 32, 83, 10, 11, 6, 89, 141, 11, 5, 21, 4, 27, 25, 5, 5, 25, 8, 8, 7, 8, 7, 5, 38, 18, 18
] | ["passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotatio(...TRUNCATED) | [-0.005825735162943602,0.08974096179008484,-0.0041341837495565414,0.0642194151878357,0.0429573766887(...TRUNCATED) |
68a83b6cd4730be5e0ecbdbee941eef8f13aa867 | "\n# Dataset Card for \"ag_news\"\n\n## Table of Contents\n- [Dataset Description](#dataset-descript(...TRUNCATED) | ag_news | ["task_categories:text-classification","task_ids:topic-classification","annotations_creators:found",(...TRUNCATED) | 2022-03-02T23:29:22+00:00 | "{\"annotations_creators\": [\"found\"], \"language_creators\": [\"found\"], \"language\": [\"en\"],(...TRUNCATED) | 2024-01-18T10:52:09+00:00 | [] | [
"en"
] | "TAGS\n#task_categories-text-classification #task_ids-topic-classification #annotations_creators-fou(...TRUNCATED) | "Dataset Card for \"ag\\_news\"\n===========================\n\n\nTable of Contents\n---------------(...TRUNCATED) | ["### Dataset Summary\n\n\nAG is a collection of more than 1 million news articles. News articles ha(...TRUNCATED) | ["TAGS\n#task_categories-text-classification #task_ids-topic-classification #annotations_creators-fo(...TRUNCATED) | [
84, 226, 10, 11, 6, 50, 17, 52, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 33
] | ["passage: TAGS\n#task_categories-text-classification #task_ids-topic-classification #annotations_cr(...TRUNCATED) | [-0.034058939665555954,0.1685732901096344,-0.005139347165822983,0.033307552337646484,0.0717983320355(...TRUNCATED) |
210d026faf9955653af8916fad021475a3f00453 | "\n# Dataset Card for \"ai2_arc\"\n\n## Table of Contents\n- [Dataset Description](#dataset-descript(...TRUNCATED) | allenai/ai2_arc | ["task_categories:question-answering","task_ids:open-domain-qa","task_ids:multiple-choice-qa","annot(...TRUNCATED) | 2022-03-02T23:29:22+00:00 | "{\"annotations_creators\": [\"found\"], \"language_creators\": [\"found\"], \"language\": [\"en\"],(...TRUNCATED) | 2023-12-21T15:09:48+00:00 | [
"1803.05457"
] | [
"en"
] | "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #task_ids-multiple-choice-qa #an(...TRUNCATED) | "Dataset Card for \"ai2\\_arc\"\n===========================\n\n\nTable of Contents\n---------------(...TRUNCATED) | ["### Dataset Summary\n\n\nA new dataset of 7,787 genuine grade-school level, multiple-choice scienc(...TRUNCATED) | ["TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #task_ids-multiple-choice-qa #a(...TRUNCATED) | [
113, 130, 10, 11, 6, 58, 58, 17, 81, 81, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 28
] | ["passage: TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #task_ids-multiple-cho(...TRUNCATED) | [-0.0437714047729969,0.08017850667238235,-0.004367523826658726,0.03722572326660156,0.067490980029106(...TRUNCATED) |
69a8c7b33b9ae3281d93bdc34e85735b2ad4e662 | "\n# Dataset Card for air_dialogue\n\n## Table of Contents\n- [Dataset Description](#dataset-descrip(...TRUNCATED) | air_dialogue | ["task_categories:conversational","task_categories:text-generation","task_categories:fill-mask","tas(...TRUNCATED) | 2022-03-02T23:29:22+00:00 | "{\"annotations_creators\": [\"crowdsourced\"], \"language_creators\": [\"machine-generated\"], \"la(...TRUNCATED) | 2024-01-18T10:59:29+00:00 | [] | [
"en"
] | "TAGS\n#task_categories-conversational #task_categories-text-generation #task_categories-fill-mask #(...TRUNCATED) | "Dataset Card for air\\_dialogue\n==============================\n\n\nTable of Contents\n-----------(...TRUNCATED) | ["### Dataset Summary\n\n\nAirDialogue, is a large dataset that contains 402,038 goal-oriented conve(...TRUNCATED) | ["TAGS\n#task_categories-conversational #task_categories-text-generation #task_categories-fill-mask (...TRUNCATED) | [
152, 83, 85, 30, 90, 71, 30, 7, 4, 10, 10, 5, 120, 9, 26, 7, 8, 14, 26, 449, 18
] | ["passage: TAGS\n#task_categories-conversational #task_categories-text-generation #task_categories-f(...TRUNCATED) | [-0.030246928334236145,0.1698847860097885,-0.0039741569198668,0.07170038670301437,0.0370448417961597(...TRUNCATED) |
af3f2fa5462ac461b696cb300d66e07ad366057f | "\n# Dataset Card for Arabic Jordanian General Tweets\n\n## Table of Contents\n- [Dataset Card for A(...TRUNCATED) | ajgt_twitter_ar | ["task_categories:text-classification","task_ids:sentiment-classification","annotations_creators:fou(...TRUNCATED) | 2022-03-02T23:29:22+00:00 | "{\"annotations_creators\": [\"found\"], \"language_creators\": [\"found\"], \"language\": [\"ar\"],(...TRUNCATED) | 2024-01-09T11:58:01+00:00 | [] | [
"ar"
] | "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators(...TRUNCATED) | "Dataset Card for Arabic Jordanian General Tweets\n================================================\(...TRUNCATED) | ["### Dataset Summary\n\n\nArabic Jordanian General Tweets (AJGT) Corpus consisted of 1,800 tweets a(...TRUNCATED) | ["TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creator(...TRUNCATED) | [
86, 45, 19, 19, 19, 25, 18, 7, 4, 22, 14, 17, 5, 9, 18, 7, 8, 14, 6, 6, 25
] | ["passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotation(...TRUNCATED) | [-0.0020078481175005436,0.1882128268480301,-0.005164021160453558,0.05199567228555679,0.0484087690711(...TRUNCATED) |
71593d1379934286885c53d147bc863ffe830745 | "\n# Dataset Card for [Dataset Name]\n\n## Table of Contents\n- [Dataset Description](#dataset-descr(...TRUNCATED) | allegro_reviews | ["task_categories:text-classification","task_ids:sentiment-scoring","task_ids:text-scoring","annotat(...TRUNCATED) | 2022-03-02T23:29:22+00:00 | "{\"annotations_creators\": [\"found\"], \"language_creators\": [\"found\"], \"language\": [\"pl\"],(...TRUNCATED) | 2024-01-09T11:59:39+00:00 | [] | [
"pl"
] | "TAGS\n#task_categories-text-classification #task_ids-sentiment-scoring #task_ids-text-scoring #anno(...TRUNCATED) | "\n# Dataset Card for [Dataset Name]\n\n## Table of Contents\n- Dataset Description\n - Dataset Sum(...TRUNCATED) | ["# Dataset Card for [Dataset Name]","## Table of Contents\n- Dataset Description\n - Dataset Summa(...TRUNCATED) | ["TAGS\n#task_categories-text-classification #task_ids-sentiment-scoring #task_ids-text-scoring #ann(...TRUNCATED) | [
100, 10, 120, 75, 110, 16, 6, 6, 37, 38, 17, 5, 26, 4, 31, 19, 5, 5, 9, 8, 8, 7, 8, 7, 5, 19, 150, 17
] | ["passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-scoring #task_ids-text-sco(...TRUNCATED) | [-0.03407987579703331,0.18136045336723328,-0.006624291185289621,0.05663428455591202,0.05668739974498(...TRUNCATED) |
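Each row above pairs a dataset card's raw markdown with derived fields: the card's tags, chunked passages (`processed_texts`), what appear to be per-chunk token counts (`tokens_length`), and a 768-dimensional embedding. Below is a minimal sketch of how rows like these could be loaded and inspected with the `datasets` library; the repository id `user/dataset-card-embeddings` and the split name are placeholders, not the real identifiers.

```
from datasets import load_dataset

# Placeholder repository id and split name; substitute the actual values.
ds = load_dataset("user/dataset-card-embeddings", split="train")

row = ds[0]
print(row["id"])               # e.g. "afrikaans_ner_corpus"
print(row["tags"][:3])         # first few tag strings
print(len(row["embeddings"]))  # embedding dimensionality (768 per the schema)

# Assumption: tokens_length holds one token count per processed chunk.
for chunk, n_tokens in zip(row["processed_texts"], row["tokens_length"]):
    print(n_tokens, chunk[:60])
```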