Columns of the dump, with their types and minimum/maximum lengths:

| Column | Type | Min length | Max length |
| --- | --- | --- | --- |
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | sequence | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | sequence | 0 | 25 |
| languages | sequence | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | sequence | 0 | 352 |
| processed_texts | sequence | 1 | 353 |
| tokens_length | sequence | 1 | 353 |
| input_texts | sequence | 1 | 40 |
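A minimal sketch of how these per-column statistics could be recomputed, assuming pandas is installed and the dump is available locally; `dataset_cards.parquet` is a hypothetical file name:

```
import pandas as pd

df = pd.read_parquet("dataset_cards.parquet")  # hypothetical local path

# Min/max character length per string column, as in the schema table above.
for col in ["sha", "text", "id", "metadata", "tags_str", "text_str"]:
    lengths = df[col].str.len()
    print(f"{col}: min={lengths.min()}, max={lengths.max()}")

# Sequence columns report min/max element counts instead.
for col in ["tags", "arxiv", "languages", "processed_texts"]:
    counts = df[col].map(len)
    print(f"{col}: min={counts.min()}, max={counts.max()}")
```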
15ef643450d589d5883e289ffadeb03563e80a9e
# Dataset Card for Acronym Identification Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://sites.google.com/view/sdu-aaai21/shared-task - **Repository:** https://github.com/amirveyseh/AAAI-21-SDU-shared-task-1-AI - **Paper:** [What Does This Acronym Mean? Introducing a New Dataset for Acronym Identification and Disambiguation](https://arxiv.org/pdf/2010.14678v1.pdf) - **Leaderboard:** https://competitions.codalab.org/competitions/26609 - **Point of Contact:** [More Information Needed] ### Dataset Summary This dataset contains the training, validation, and test data for the **Shared Task 1: Acronym Identification** of the AAAI-21 Workshop on Scientific Document Understanding. ### Supported Tasks and Leaderboards The dataset supports an `acronym-identification` task, where the aim is to predict which tokens in a pre-tokenized sentence correspond to acronyms. The dataset was released for a Shared Task which supported a [leaderboard](https://competitions.codalab.org/competitions/26609). ### Languages The sentences in the dataset are in English (`en`). ## Dataset Structure ### Data Instances A sample from the training set is provided below: ``` {'id': 'TR-0', 'labels': [4, 4, 4, 4, 0, 2, 2, 4, 1, 4, 4, 4, 4, 4, 4, 4, 4, 4], 'tokens': ['What', 'is', 'here', 'called', 'controlled', 'natural', 'language', '(', 'CNL', ')', 'has', 'traditionally', 'been', 'given', 'many', 'different', 'names', '.']} ``` Please note that in test set sentences only the `id` and `tokens` fields are available; `labels` can be ignored for the test set, since labels there are all `O`. ### Data Fields The data instances have the following fields: - `id`: a `string` variable representing the example id, unique across the full dataset - `tokens`: a list of `string` variables representing the word-tokenized sentence - `labels`: a list of `categorical` variables with possible values `["B-long", "B-short", "I-long", "I-short", "O"]` corresponding to a BIO scheme. `-long` corresponds to the expanded acronym, such as *controlled natural language* here, and `-short` to the abbreviation, `CNL` here. ### Data Splits The training, validation, and test set contain `14,006`, `1,717`, and `1,750` sentences, respectively. ## Dataset Creation ### Curation Rationale > First, most of the existing datasets for acronym identification (AI) are either limited in their sizes or created using simple rule-based methods.
> This is unfortunate as rules are in general not able to capture all the diverse forms to express acronyms and their long forms in text. > Second, most of the existing datasets are in the medical domain, ignoring the challenges in other scientific domains. > In order to address these limitations this paper introduces two new datasets for Acronym Identification. > Notably, our datasets are annotated by human to achieve high quality and have substantially larger numbers of examples than the existing AI datasets in the non-medical domain. ### Source Data #### Initial Data Collection and Normalization > In order to prepare a corpus for acronym annotation, we collect a corpus of 6,786 English papers from arXiv. > These papers consist of 2,031,592 sentences that would be used for data annotation for AI in this work. The dataset paper does not report the exact tokenization method. #### Who are the source language producers? The language comes from papers hosted on the online digital archive [arXiv](https://arxiv.org/). No more information is available on the selection process or identity of the writers. ### Annotations #### Annotation process > Each sentence for annotation needs to contain at least one word in which more than half of the characters are capital letters (i.e., acronym candidates). > Afterward, we search for a sub-sequence of words in which the concatenation of the first one, two or three characters of the words (in the order of the words in the sub-sequence) could form an acronym candidate. > We call the sub-sequence a long form candidate. If we cannot find any long form candidate, we remove the sentence. > Using this process, we end up with 17,506 sentences to be annotated manually by the annotators from Amazon Mechanical Turk (MTurk). > In particular, we create a HIT for each sentence and ask the workers to annotate the short forms and the long forms in the sentence. > In case of disagreements, if two out of three workers agree on an annotation, we use majority voting to decide the correct annotation. > Otherwise, a fourth annotator is hired to resolve the conflict. #### Who are the annotators? Workers were recruited through Amazon Mechanical Turk and paid $0.05 per annotation. No further demographic information is provided. ### Personal and Sensitive Information Papers published on arXiv are unlikely to contain much personal information, although some include poorly chosen examples revealing personal details, so the data should be used with care. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset provided for this shared task is licensed under the CC BY-NC-SA 4.0 International license. ### Citation Information ``` @inproceedings{Veyseh2020, author = {Amir Pouran Ben Veyseh and Franck Dernoncourt and Quan Hung Tran and Thien Huu Nguyen}, editor = {Donia Scott and N{\'{u}}ria Bel and Chengqing Zong}, title = {What Does This Acronym Mean?
Introducing a New Dataset for Acronym Identification and Disambiguation}, booktitle = {Proceedings of the 28th International Conference on Computational Linguistics, {COLING} 2020, Barcelona, Spain (Online), December 8-13, 2020}, pages = {3285--3301}, publisher = {International Committee on Computational Linguistics}, year = {2020}, url = {https://doi.org/10.18653/v1/2020.coling-main.292}, doi = {10.18653/v1/2020.coling-main.292} } ``` ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
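As a usage sketch for the card above, the integer labels can be decoded back into the BIO tag names. This assumes the Hugging Face `datasets` library and that the dataset loads under the Hub id `acronym_identification`, which is taken from this row's `id` field:

```
from datasets import load_dataset

# Assumes the Hub id from this row's `id` field.
ds = load_dataset("acronym_identification")

# `labels` is a sequence of ClassLabel ids; map them back to the BIO tag
# names, e.g. ["B-long", "B-short", "I-long", "I-short", "O"].
label_names = ds["train"].features["labels"].feature.names

example = ds["train"][0]
for token, label_id in zip(example["tokens"], example["labels"]):
    print(f"{token}\t{label_names[label_id]}")
```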
acronym_identification
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "acronym-identification", "arxiv:2010.14678", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": [], "paperswithcode_id": "acronym-identification", "pretty_name": "Acronym Identification Dataset", "tags": ["acronym-identification"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "B-long", "1": "B-short", "2": "I-long", "3": "I-short", "4": "O"}}}}], "splits": [{"name": "train", "num_bytes": 7792771, "num_examples": 14006}, {"name": "validation", "num_bytes": 952689, "num_examples": 1717}, {"name": "test", "num_bytes": 987712, "num_examples": 1750}], "download_size": 2071007, "dataset_size": 9733172}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "train-eval-index": [{"config": "default", "task": "token-classification", "task_id": "entity_extraction", "splits": {"eval_split": "test"}, "col_mapping": {"tokens": "tokens", "labels": "tags"}}]}
2024-01-09T11:39:57+00:00
[ "2010.14678" ]
[ "en" ]
4ba01c71687dd7c996597042449448ea312126cf
# Dataset Card for Adverse Drug Reaction Data v2 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.sciencedirect.com/science/article/pii/S1532046412000615 - **Repository:** [Needs More Information] - **Paper:** https://www.sciencedirect.com/science/article/pii/S1532046412000615 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary ADE-Corpus-V2 Dataset: Adverse Drug Reaction Data. This dataset supports two tasks: classification of whether a sentence is ADE-related (True) or not (False), and relation extraction between an adverse drug event and a drug. DRUG-AE.rel provides relations between drugs and adverse effects. DRUG-DOSE.rel provides relations between drugs and dosages. ADE-NEG.txt provides all sentences in the ADE corpus that DO NOT contain any drug-related adverse effects. ### Supported Tasks and Leaderboards Sentence classification, Relation Extraction ### Languages English ## Dataset Structure ### Data Instances #### Config - `Ade_corpus_v2_classification` ``` { 'label': 1, 'text': 'Intravenous azithromycin-induced ototoxicity.' } ``` #### Config - `Ade_corpus_v2_drug_ade_relation` ``` { 'drug': 'azithromycin', 'effect': 'ototoxicity', 'indexes': { 'drug': { 'end_char': [24], 'start_char': [12] }, 'effect': { 'end_char': [44], 'start_char': [33] } }, 'text': 'Intravenous azithromycin-induced ototoxicity.' } ``` #### Config - `Ade_corpus_v2_drug_dosage_relation` ``` { 'dosage': '4 times per day', 'drug': 'insulin', 'indexes': { 'dosage': { 'end_char': [56], 'start_char': [41] }, 'drug': { 'end_char': [40], 'start_char': [33]} }, 'text': 'She continued to receive regular insulin 4 times per day over the following 3 years with only occasional hives.' } ``` ### Data Fields #### Config - `Ade_corpus_v2_classification` - `text` - Input text. - `label` - Whether the sentence is adverse drug effect (ADE) related (1) or not (0). #### Config - `Ade_corpus_v2_drug_ade_relation` - `text` - Input text. - `drug` - Name of drug. - `effect` - Effect caused by the drug. - `indexes.drug.start_char` - Start index of `drug` string in text. - `indexes.drug.end_char` - End index of `drug` string in text. - `indexes.effect.start_char` - Start index of `effect` string in text. - `indexes.effect.end_char` - End index of `effect` string in text. #### Config - `Ade_corpus_v2_drug_dosage_relation` - `text` - Input text. - `drug` - Name of drug. - `dosage` - Dosage of the drug. - `indexes.drug.start_char` - Start index of `drug` string in text.
- `indexes.drug.end_char` - End index of `drug` string in text. - `indexes.dosage.start_char` - Start index of `dosage` string in text. - `indexes.dosage.end_char` - End index of `dosage` string in text. ### Data Splits | Train | | ------ | | 23516 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information ``` @article{GURULINGAPPA2012885, title = "Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports", journal = "Journal of Biomedical Informatics", volume = "45", number = "5", pages = "885 - 892", year = "2012", note = "Text Mining and Natural Language Processing in Pharmacogenomics", issn = "1532-0464", doi = "https://doi.org/10.1016/j.jbi.2012.04.008", url = "http://www.sciencedirect.com/science/article/pii/S1532046412000615", author = "Harsha Gurulingappa and Abdul Mateen Rajput and Angus Roberts and Juliane Fluck and Martin Hofmann-Apitius and Luca Toldo", keywords = "Adverse drug effect, Benchmark corpus, Annotation, Harmonization, Sentence classification", abstract = "A significant amount of information about drug-related safety issues such as adverse effects are published in medical case reports that can only be explored by human readers due to their unstructured nature. The work presented here aims at generating a systematically annotated corpus that can support the development and validation of methods for the automatic extraction of drug-related adverse effects from medical case reports. The documents are systematically double annotated in various rounds to ensure consistent annotations. The annotated documents are finally harmonized to generate representative consensus annotations. In order to demonstrate an example use case scenario, the corpus was employed to train and validate models for the classification of informative against the non-informative sentences. A Maximum Entropy classifier trained with simple features and evaluated by 10-fold cross-validation resulted in the F1 score of 0.70 indicating a potential useful application of the corpus." } ``` ### Contributions Thanks to [@Nilanshrajput](https://github.com/Nilanshrajput), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
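As a usage sketch for the card above, the character indexes recover the annotated spans directly from the text. This assumes the `datasets` library, the Hub id `ade_corpus_v2` from this row's `id` field, and the config name listed in the row's metadata:

```
from datasets import load_dataset

# Assumes the Hub id from this row's `id` field and the config name
# "Ade_corpus_v2_drug_ade_relation" from the row's metadata.
ds = load_dataset("ade_corpus_v2", "Ade_corpus_v2_drug_ade_relation")

ex = ds["train"][0]
# start_char/end_char are lists (one span per mention); end_char appears
# exclusive, as in the sample instance above (text[12:24] == "azithromycin").
start = ex["indexes"]["drug"]["start_char"][0]
end = ex["indexes"]["drug"]["end_char"][0]
assert ex["text"][start:end] == ex["drug"]
print(ex["drug"], "->", ex["effect"])
```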
ade_corpus_v2
[ "task_categories:text-classification", "task_categories:token-classification", "task_ids:coreference-resolution", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "1K<n<10K", "n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification", "token-classification"], "task_ids": ["coreference-resolution", "fact-checking"], "pretty_name": "Adverse Drug Reaction Data v2", "config_names": ["Ade_corpus_v2_classification", "Ade_corpus_v2_drug_ade_relation", "Ade_corpus_v2_drug_dosage_relation"], "dataset_info": [{"config_name": "Ade_corpus_v2_classification", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Not-Related", "1": "Related"}}}}], "splits": [{"name": "train", "num_bytes": 3403699, "num_examples": 23516}], "download_size": 1706476, "dataset_size": 3403699}, {"config_name": "Ade_corpus_v2_drug_ade_relation", "features": [{"name": "text", "dtype": "string"}, {"name": "drug", "dtype": "string"}, {"name": "effect", "dtype": "string"}, {"name": "indexes", "struct": [{"name": "drug", "sequence": [{"name": "start_char", "dtype": "int32"}, {"name": "end_char", "dtype": "int32"}]}, {"name": "effect", "sequence": [{"name": "start_char", "dtype": "int32"}, {"name": "end_char", "dtype": "int32"}]}]}], "splits": [{"name": "train", "num_bytes": 1545993, "num_examples": 6821}], "download_size": 491362, "dataset_size": 1545993}, {"config_name": "Ade_corpus_v2_drug_dosage_relation", "features": [{"name": "text", "dtype": "string"}, {"name": "drug", "dtype": "string"}, {"name": "dosage", "dtype": "string"}, {"name": "indexes", "struct": [{"name": "drug", "sequence": [{"name": "start_char", "dtype": "int32"}, {"name": "end_char", "dtype": "int32"}]}, {"name": "dosage", "sequence": [{"name": "start_char", "dtype": "int32"}, {"name": "end_char", "dtype": "int32"}]}]}], "splits": [{"name": "train", "num_bytes": 64697, "num_examples": 279}], "download_size": 33004, "dataset_size": 64697}], "configs": [{"config_name": "Ade_corpus_v2_classification", "data_files": [{"split": "train", "path": "Ade_corpus_v2_classification/train-*"}]}, {"config_name": "Ade_corpus_v2_drug_ade_relation", "data_files": [{"split": "train", "path": "Ade_corpus_v2_drug_ade_relation/train-*"}]}, {"config_name": "Ade_corpus_v2_drug_dosage_relation", "data_files": [{"split": "train", "path": "Ade_corpus_v2_drug_dosage_relation/train-*"}]}], "train-eval-index": [{"config": "Ade_corpus_v2_classification", "task": "text-classification", "task_id": "multi_class_classification", "splits": {"train_split": "train"}, "col_mapping": {"text": "text", "label": "target"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 macro", "args": {"average": "macro"}}, {"type": "f1", "name": "F1 micro", "args": {"average": "micro"}}, {"type": "f1", "name": "F1 weighted", "args": {"average": "weighted"}}, {"type": "precision", "name": "Precision macro", "args": {"average": "macro"}}, {"type": "precision", "name": "Precision micro", "args": {"average": "micro"}}, {"type": "precision", "name": "Precision weighted", "args": {"average": "weighted"}}, {"type": "recall", "name": "Recall macro", "args": {"average": "macro"}}, {"type": "recall", "name": "Recall micro", "args": {"average": "micro"}}, {"type": "recall", "name": "Recall weighted", "args": {"average": "weighted"}}]}]}
2024-01-09T11:42:58+00:00
[]
[ "en" ]
c2d5f738db1ad21a4126a144dfbb00cb51e0a4a9
# Dataset Card for adversarialQA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [adversarialQA homepage](https://adversarialqa.github.io/) - **Repository:** [adversarialQA repository](https://github.com/maxbartolo/adversarialQA) - **Paper:** [Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension](https://arxiv.org/abs/2002.00293) - **Leaderboard:** [Dynabench QA Round 1 Leaderboard](https://dynabench.org/tasks/2#overall) - **Point of Contact:** [Max Bartolo](max.bartolo@ucl.ac.uk) ### Dataset Summary We have created three new Reading Comprehension datasets constructed using an adversarial model-in-the-loop. We use three different models: BiDAF (Seo et al., 2016), BERT-Large (Devlin et al., 2018), and RoBERTa-Large (Liu et al., 2019) in the annotation loop and construct three datasets: D(BiDAF), D(BERT), and D(RoBERTa), each with 10,000 training examples, 1,000 validation examples, and 1,000 test examples. The adversarial human annotation paradigm ensures that these datasets consist of questions that current state-of-the-art models (at least the ones used as adversaries in the annotation loop) find challenging. The three AdversarialQA round 1 datasets provide a training and evaluation resource for such methods. ### Supported Tasks and Leaderboards `extractive-qa`: The dataset can be used to train a model for Extractive Question Answering, which consists of selecting the answer to a question from a passage. Success on this task is typically measured by achieving a high word-overlap [F1 score](https://huggingface.co/metrics/f1). The [RoBERTa-Large](https://huggingface.co/roberta-large) model trained on all the data combined with [SQuAD](https://arxiv.org/abs/1606.05250) currently achieves 64.35% F1. This task has an active leaderboard and is available as round 1 of the QA task on [Dynabench](https://dynabench.org/tasks/2#overall) and ranks models based on F1 score. ### Languages The text in the dataset is in English. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances Data is provided in the same format as SQuAD 1.1. An example is shown below: ``` { "data": [ { "title": "Oxygen", "paragraphs": [ { "context": "Among the most important classes of organic compounds that contain oxygen are (where \"R\" is an organic group): alcohols (R-OH); ethers (R-O-R); ketones (R-CO-R); aldehydes (R-CO-H); carboxylic acids (R-COOH); esters (R-COO-R); acid anhydrides (R-CO-O-CO-R); and amides (R-C(O)-NR2).
There are many important organic solvents that contain oxygen, including: acetone, methanol, ethanol, isopropanol, furan, THF, diethyl ether, dioxane, ethyl acetate, DMF, DMSO, acetic acid, and formic acid. Acetone ((CH3)2CO) and phenol (C6H5OH) are used as feeder materials in the synthesis of many different substances. Other important organic compounds that contain oxygen are: glycerol, formaldehyde, glutaraldehyde, citric acid, acetic anhydride, and acetamide. Epoxides are ethers in which the oxygen atom is part of a ring of three atoms.", "qas": [ { "id": "22bbe104aa72aa9b511dd53237deb11afa14d6e3", "question": "In addition to having oxygen, what do alcohols, ethers and esters have in common, according to the article?", "answers": [ { "answer_start": 36, "text": "organic compounds" } ] }, { "id": "4240a8e708c703796347a3702cf1463eed05584a", "question": "What letter does the abbreviation for acid anhydrides both begin and end in?", "answers": [ { "answer_start": 244, "text": "R" } ] }, { "id": "0681a0a5ec852ec6920d6a30f7ef65dced493366", "question": "Which of the organic compounds, in the article, contains nitrogen?", "answers": [ { "answer_start": 262, "text": "amides" } ] }, { "id": "2990efe1a56ccf81938fa5e18104f7d3803069fb", "question": "Which of the important classes of organic compounds, in the article, has a number in its abbreviation?", "answers": [ { "answer_start": 262, "text": "amides" } ] } ] } ] } ] } ``` ### Data Fields - title: the title of the Wikipedia page from which the context is sourced - context: the context/passage - id: a string identifier for each question - answers: a list of all provided answers (one per question in our case, but multiple may exist in SQuAD) with an `answer_start` field which is the character index of the start of the answer span, and a `text` field which is the answer text. Note that no answers are provided in the test set. Indeed, this dataset is part of the DynaBench benchmark, for which you can submit your predictions on the [website](https://dynabench.org/tasks/2#1). ### Data Splits The dataset is composed of three different datasets constructed using different models in the loop: BiDAF, BERT-Large, and RoBERTa-Large. Each of these has 10,000 training examples, 1,000 validation examples, and 1,000 test examples for a total of 30,000/3,000/3,000 train/validation/test examples. ## Dataset Creation ### Curation Rationale This dataset was collected to provide a more challenging and diverse Reading Comprehension dataset to state-of-the-art models. ### Source Data #### Initial Data Collection and Normalization The source passages are from Wikipedia and are the same as those used in [SQuAD v1.1](https://arxiv.org/abs/1606.05250). #### Who are the source language producers? The source language producers are Wikipedia editors for the passages, and human annotators on Mechanical Turk for the questions. ### Annotations #### Annotation process The dataset is collected through an adversarial human annotation process which pairs a human annotator and a reading comprehension model in an interactive setting. The human is presented with a passage for which they write a question and highlight the correct answer. The model then tries to answer the question, and, if it fails to answer correctly, the human wins. Otherwise, the human modifies or re-writes their question until they successfully fool the model. #### Who are the annotators?
The annotators are from Amazon Mechanical Turk, geographically restricted to the USA, UK, and Canada, having previously successfully completed at least 1,000 HITs, and having a HIT approval rate greater than 98%. Crowdworkers undergo intensive training and qualification prior to annotation. ### Personal and Sensitive Information No annotator identifying details are provided. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop better question answering systems. A system that succeeds at the supported task would be able to provide an accurate extractive answer from a short passage. This dataset is to be seen as a test bed for questions which contemporary state-of-the-art models struggle to answer correctly, thus often requiring more complex comprehension abilities than, say, detecting phrases explicitly mentioned in the passage with high overlap to the question. It should be noted, however, that the source passages are both domain-restricted and linguistically specific, and that provided questions and answers do not constitute any particular social application. ### Discussion of Biases The dataset may exhibit various biases in terms of the source passage selection, annotated questions and answers, as well as algorithmic biases resulting from the adversarial annotation protocol. ### Other Known Limitations N/A ## Additional Information ### Dataset Curators This dataset was initially created by Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp, during work carried out at University College London (UCL). ### Licensing Information This dataset is distributed under [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/). ### Citation Information ``` @article{bartolo2020beat, author = {Bartolo, Max and Roberts, Alastair and Welbl, Johannes and Riedel, Sebastian and Stenetorp, Pontus}, title = {Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension}, journal = {Transactions of the Association for Computational Linguistics}, volume = {8}, number = {}, pages = {662-678}, year = {2020}, doi = {10.1162/tacl\_a\_00338}, URL = { https://doi.org/10.1162/tacl_a_00338 }, eprint = { https://doi.org/10.1162/tacl_a_00338 }, abstract = { Innovations in annotation methodology have been a catalyst for Reading Comprehension (RC) datasets and models. One recent trend to challenge current RC models is to involve a model in the annotation process: Humans create questions adversarially, such that the model fails to answer them correctly. In this work we investigate this annotation methodology and apply it in three different settings, collecting a total of 36,000 samples with progressively stronger models in the annotation loop. This allows us to explore questions such as the reproducibility of the adversarial effect, transfer from data collected with varying model-in-the-loop strengths, and generalization to data collected without a model. We find that training on adversarially collected samples leads to strong generalization to non-adversarially collected datasets, yet with progressive performance deterioration with increasingly stronger models-in-the-loop. Furthermore, we find that stronger models can still learn from datasets collected with substantially weaker models-in-the-loop. 
When trained on data collected with a BiDAF model in the loop, RoBERTa achieves 39.9F1 on questions that it cannot answer when trained on SQuAD—only marginally lower than when trained on data collected using RoBERTa itself (41.0F1). } } ``` ### Contributions Thanks to [@maxbartolo](https://github.com/maxbartolo) for adding this dataset.
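For readers who want to work with the span encoding described under Data Fields, here is a minimal sketch that loads the data, verifies that each `answer_start` offset indexes the answer text within its context, and computes a simplified word-overlap F1 of the kind used to score extractive QA. It assumes the Hugging Face `datasets` library; the repository id (`UCLNLP/adversarial_qa`) and config names are taken from the metadata that follows this card, and the F1 helper deliberately omits the lowercasing and punctuation stripping performed by the official SQuAD evaluation script.

```python
from collections import Counter

from datasets import load_dataset

# "adversarialQA" is the combined config; "dbidaf", "dbert", and
# "droberta" select the individual model-in-the-loop subsets.
dataset = load_dataset("UCLNLP/adversarial_qa", "adversarialQA")

example = dataset["train"][0]
context = example["context"]

# `answers` is SQuAD-style: parallel lists of answer texts and
# character offsets into the context.
for text, start in zip(example["answers"]["text"],
                       example["answers"]["answer_start"]):
    assert context[start:start + len(text)] == text

def word_overlap_f1(prediction: str, reference: str) -> float:
    """Simplified token-level F1 (no normalization, unlike SQuAD eval)."""
    pred, ref = prediction.split(), reference.split()
    common = sum((Counter(pred) & Counter(ref)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

print(word_overlap_f1("the organic compounds", example["answers"]["text"][0]))
```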
UCLNLP/adversarial_qa
[ "task_categories:question-answering", "task_ids:extractive-qa", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "arxiv:2002.00293", "arxiv:1606.05250", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa", "open-domain-qa"], "paperswithcode_id": "adversarialqa", "pretty_name": "adversarialQA", "dataset_info": [{"config_name": "adversarialQA", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "metadata", "struct": [{"name": "split", "dtype": "string"}, {"name": "model_in_the_loop", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 27858686, "num_examples": 30000}, {"name": "validation", "num_bytes": 2757092, "num_examples": 3000}, {"name": "test", "num_bytes": 2919479, "num_examples": 3000}], "download_size": 5301049, "dataset_size": 33535257}, {"config_name": "dbert", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "metadata", "struct": [{"name": "split", "dtype": "string"}, {"name": "model_in_the_loop", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 9345521, "num_examples": 10000}, {"name": "validation", "num_bytes": 918156, "num_examples": 1000}, {"name": "test", "num_bytes": 971290, "num_examples": 1000}], "download_size": 2689032, "dataset_size": 11234967}, {"config_name": "dbidaf", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "metadata", "struct": [{"name": "split", "dtype": "string"}, {"name": "model_in_the_loop", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 9282482, "num_examples": 10000}, {"name": "validation", "num_bytes": 917907, "num_examples": 1000}, {"name": "test", "num_bytes": 946947, "num_examples": 1000}], "download_size": 2721341, "dataset_size": 11147336}, {"config_name": "droberta", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "metadata", "struct": [{"name": "split", "dtype": "string"}, {"name": "model_in_the_loop", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 9270683, "num_examples": 10000}, {"name": "validation", "num_bytes": 925029, "num_examples": 1000}, {"name": "test", "num_bytes": 1005242, "num_examples": 1000}], "download_size": 2815452, "dataset_size": 11200954}], "configs": [{"config_name": "adversarialQA", "data_files": [{"split": "train", "path": "adversarialQA/train-*"}, {"split": "validation", "path": "adversarialQA/validation-*"}, {"split": "test", "path": "adversarialQA/test-*"}]}, {"config_name": "dbert", "data_files": [{"split": "train", "path": "dbert/train-*"}, {"split": "validation", "path": "dbert/validation-*"}, {"split": 
"test", "path": "dbert/test-*"}]}, {"config_name": "dbidaf", "data_files": [{"split": "train", "path": "dbidaf/train-*"}, {"split": "validation", "path": "dbidaf/validation-*"}, {"split": "test", "path": "dbidaf/test-*"}]}, {"config_name": "droberta", "data_files": [{"split": "train", "path": "droberta/train-*"}, {"split": "validation", "path": "droberta/validation-*"}, {"split": "test", "path": "droberta/test-*"}]}], "train-eval-index": [{"config": "adversarialQA", "task": "question-answering", "task_id": "extractive_question_answering", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "question", "context": "context", "answers": {"text": "text", "answer_start": "answer_start"}}, "metrics": [{"type": "squad", "name": "SQuAD"}]}]}
2023-12-21T14:20:00+00:00
[ "2002.00293", "1606.05250" ]
[ "en" ]
2305f2e63b68056f9b9037a3805c8c196e0d5581
# Dataset Card for "aeslc" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/ryanzhumich/AESLC - **Paper:** [This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation](https://arxiv.org/abs/1906.03497) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 11.64 MB - **Size of the generated dataset:** 14.95 MB - **Total amount of disk used:** 26.59 MB ### Dataset Summary A collection of email messages of employees in the Enron Corporation. There are two features: - email_body: email body text. - subject_line: email subject text. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages Monolingual English (mainly en-US) with some exceptions. ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 11.64 MB - **Size of the generated dataset:** 14.95 MB - **Total amount of disk used:** 26.59 MB An example of 'train' looks as follows. ``` { "email_body": "B/C\n<<some doc>>\n", "subject_line": "Service Agreement" } ``` ### Data Fields The data fields are the same among all splits. #### default - `email_body`: a `string` feature. - `subject_line`: a `string` feature. ### Data Splits | name |train|validation|test| |-------|----:|---------:|---:| |default|14436| 1960|1906| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{zhang-tetreault-2019-email, title = "This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation", author = "Zhang, Rui and Tetreault, Joel", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P19-1043", doi = "10.18653/v1/P19-1043", pages = "446--456", } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
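As a quick orientation to the two fields described above, the sketch below loads the corpus and prints one training pair next to a naive first-line baseline for the subject. It assumes the Hugging Face `datasets` library and the `aeslc` Hub id recorded in the metadata that follows this card; the baseline is purely illustrative and is not the method of the paper.

```python
from datasets import load_dataset

# "aeslc" is the Hub id given in this card's metadata.
dataset = load_dataset("aeslc")

example = dataset["train"][0]
print("Body:   ", example["email_body"][:100].strip())
print("Subject:", example["subject_line"].strip())

# Naive baseline: reuse the first non-empty line of the body as the
# predicted subject (illustrative only; real systems generate it).
baseline = next(
    (line.strip() for line in example["email_body"].splitlines() if line.strip()),
    "",
)
print("Baseline:", baseline)
```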
aeslc
[ "task_categories:summarization", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "aspect-based-summarization", "conversations-summarization", "multi-document-summarization", "email-headline-generation", "arxiv:1906.03497", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "paperswithcode_id": "aeslc", "pretty_name": "AESLC: Annotated Enron Subject Line Corpus", "tags": ["aspect-based-summarization", "conversations-summarization", "multi-document-summarization", "email-headline-generation"], "dataset_info": {"features": [{"name": "email_body", "dtype": "string"}, {"name": "subject_line", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11897245, "num_examples": 14436}, {"name": "validation", "num_bytes": 1659987, "num_examples": 1960}, {"name": "test", "num_bytes": 1383452, "num_examples": 1906}], "download_size": 7948020, "dataset_size": 14940684}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-09T11:49:13+00:00
[ "1906.03497" ]
[ "en" ]
445834a997dce8b40e1d108638064381de80c497
# Dataset Card for Afrikaans Ner Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Afrikaans Ner Corpus Homepage](https://repo.sadilar.org/handle/20.500.12185/299) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** [Martin Puttkammer](mailto:Martin.Puttkammer@nwu.ac.za) ### Dataset Summary The Afrikaans Ner Corpus is an Afrikaans dataset developed by [The Centre for Text Technology (CTexT), North-West University, South Africa](http://humanities.nwu.ac.za/ctext). The data is based on documents from the South African government domain and crawled from gov.za websites. It was created to support the NER task for the Afrikaans language. The dataset uses CoNLL shared task annotation standards. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language supported is Afrikaans. ## Dataset Structure ### Data Instances A data point consists of sentences separated by empty lines, with tab-separated tokens and tags. {'id': '0', 'ner_tags': [0, 0, 0, 0, 0], 'tokens': ['Vertaling', 'van', 'die', 'inligting', 'in'] } ### Data Fields - `id`: id of the sample - `tokens`: the tokens of the example text - `ner_tags`: the NER tags of each token The NER tags correspond to this list: ``` "OUT", "B-PERS", "I-PERS", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-MISC", "I-MISC", ``` The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PERS), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity. A short decoding sketch at the end of this card shows how to recover entity spans from these tags. ### Data Splits The data was not split. ## Dataset Creation ### Curation Rationale The data was created to help introduce resources for a new language, Afrikaans. [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data is based on the South African government domain and was crawled from gov.za websites. [More Information Needed] #### Who are the source language producers? The data was produced by writers of South African government websites (gov.za). [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The data was annotated during the NCHLT text resource development project. 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa). See: [more information](http://www.nwu.ac.za/ctext) ### Licensing Information The data is under the [Creative Commons Attribution 2.5 South Africa License](http://creativecommons.org/licenses/by/2.5/za/legalcode) ### Citation Information ``` @inproceedings{afrikaans_ner_corpus, author = { Gerhard van Huyssteen and Martin Puttkammer and E.B. Trollip and J.C. Liversage and Roald Eiselen}, title = {NCHLT Afrikaans Named Entity Annotated Corpus}, booktitle = {Eiselen, R. 2016. Government domain named entity recognition for South African languages. Proceedings of the 10th Language Resource and Evaluation Conference, Portorož, Slovenia.}, year = {2016}, url = {https://repo.sadilar.org/handle/20.500.12185/299}, } ``` ### Contributions Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
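To make the BIO scheme above concrete, here is the small decoding sketch referenced in the Data Fields section: it loads the corpus and groups `B-`/`I-` tags back into entity spans. It assumes the Hugging Face `datasets` library and the `afrikaans_ner_corpus` Hub id from the metadata below; the tag names are read from the dataset's own `ClassLabel` feature rather than hard-coded.

```python
from datasets import load_dataset

# Hub id taken from this card's metadata; only a "train" split
# exists, matching the "data was not split" note above.
dataset = load_dataset("afrikaans_ner_corpus", split="train")

# Integer tag ids map to label strings via the ClassLabel feature.
labels = dataset.features["ner_tags"].feature.names  # ["OUT", "B-PERS", ...]

def extract_entities(tokens, tag_ids):
    """Group BIO tags into (entity_text, entity_type) spans."""
    entities, current, etype = [], [], None
    for token, tag_id in zip(tokens, tag_ids):
        tag = labels[tag_id]
        if tag.startswith("B-"):                 # a new span begins
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [token], tag[2:]
        elif tag.startswith("I-") and current:   # continue the open span
            current.append(token)
        else:                                    # "OUT" (or a stray I-) closes it
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        entities.append((" ".join(current), etype))
    return entities

sample = dataset[0]
print(extract_entities(sample["tokens"], sample["ner_tags"]))
```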
afrikaans_ner_corpus
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:af", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["af"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Afrikaans Ner Corpus", "license_details": "Creative Commons Attribution 2.5 South Africa License", "dataset_info": {"config_name": "afrikaans_ner_corpus", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "OUT", "1": "B-PERS", "2": "I-PERS", "3": "B-ORG", "4": "I-ORG", "5": "B-LOC", "6": "I-LOC", "7": "B-MISC", "8": "I-MISC"}}}}], "splits": [{"name": "train", "num_bytes": 4025651, "num_examples": 8962}], "download_size": 944804, "dataset_size": 4025651}, "configs": [{"config_name": "afrikaans_ner_corpus", "data_files": [{"split": "train", "path": "afrikaans_ner_corpus/train-*"}], "default": true}]}
2024-01-09T11:51:47+00:00
[]
[ "af" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Afrikaans #license-other #region-us
# Dataset Card for Afrikaans Ner Corpus ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Afrikaans Ner Corpus Homepage - Repository: - Paper: - Leaderboard: - Point of Contact: Martin Puttkammer ### Dataset Summary The Afrikaans Ner Corpus is an Afrikaans dataset developed by The Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African goverment domain and crawled from URL websites. It was created to support NER task for Afrikaans language. The dataset uses CoNLL shared task annotation standards. ### Supported Tasks and Leaderboards ### Languages The language supported is Afrikaans. ## Dataset Structure ### Data Instances A data point consists of sentences seperated by empty line and tab-seperated tokens and tags. {'id': '0', 'ner_tags': [0, 0, 0, 0, 0], 'tokens': ['Vertaling', 'van', 'die', 'inligting', 'in'] } ### Data Fields - 'id': id of the sample - 'tokens': the tokens of the example text - 'ner_tags': the NER tags of each token The NER tags correspond to this list: The NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity. ### Data Splits The data was not split. ## Dataset Creation ### Curation Rationale The data was created to help introduce resources to new language - Afrikaans. ### Source Data #### Initial Data Collection and Normalization The data is based on South African government domain and was crawled from URL websites. #### Who are the source language producers? The data was produced by writers of South African government websites - URL ### Annotations #### Annotation process #### Who are the annotators? The data was annotated during the NCHLT text resource development project. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators The annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa). See: more information ### Licensing Information The data is under the Creative Commons Attribution 2.5 South Africa License ### Contributions Thanks to @yvonnegitau for adding this dataset.
[ "# Dataset Card for Afrikaans Ner Corpus", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Afrikaans Ner Corpus Homepage\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: Martin Puttkammer", "### Dataset Summary\nThe Afrikaans Ner Corpus is an Afrikaans dataset developed by The Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African goverment domain and crawled from URL websites. It was created to support NER task for Afrikaans language. The dataset uses CoNLL shared task annotation standards.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language supported is Afrikaans.", "## Dataset Structure", "### Data Instances\n\nA data point consists of sentences seperated by empty line and tab-seperated tokens and tags. \n{'id': '0',\n 'ner_tags': [0, 0, 0, 0, 0],\n 'tokens': ['Vertaling', 'van', 'die', 'inligting', 'in']\n}", "### Data Fields\n\n- 'id': id of the sample\n- 'tokens': the tokens of the example text\n- 'ner_tags': the NER tags of each token\n\nThe NER tags correspond to this list:\n\nThe NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity.", "### Data Splits\n\nThe data was not split.", "## Dataset Creation", "### Curation Rationale\n\nThe data was created to help introduce resources to new language - Afrikaans.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe data is based on South African government domain and was crawled from URL websites.", "#### Who are the source language producers?\n\nThe data was produced by writers of South African government websites - URL", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe data was annotated during the NCHLT text resource development project.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).\n\nSee: more information", "### Licensing Information\n\nThe data is under the Creative Commons Attribution 2.5 South Africa License", "### Contributions\n\nThanks to @yvonnegitau for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Afrikaans #license-other #region-us \n", "# Dataset Card for Afrikaans Ner Corpus", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Afrikaans Ner Corpus Homepage\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: Martin Puttkammer", "### Dataset Summary\nThe Afrikaans Ner Corpus is an Afrikaans dataset developed by The Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African goverment domain and crawled from URL websites. It was created to support NER task for Afrikaans language. The dataset uses CoNLL shared task annotation standards.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language supported is Afrikaans.", "## Dataset Structure", "### Data Instances\n\nA data point consists of sentences seperated by empty line and tab-seperated tokens and tags. \n{'id': '0',\n 'ner_tags': [0, 0, 0, 0, 0],\n 'tokens': ['Vertaling', 'van', 'die', 'inligting', 'in']\n}", "### Data Fields\n\n- 'id': id of the sample\n- 'tokens': the tokens of the example text\n- 'ner_tags': the NER tags of each token\n\nThe NER tags correspond to this list:\n\nThe NER tags have the same format as in the CoNLL shared task: a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and miscellaneous names (MISC). (OUT) is used for tokens not considered part of any named entity.", "### Data Splits\n\nThe data was not split.", "## Dataset Creation", "### Curation Rationale\n\nThe data was created to help introduce resources to new language - Afrikaans.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe data is based on South African government domain and was crawled from URL websites.", "#### Who are the source language producers?\n\nThe data was produced by writers of South African government websites - URL", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe data was annotated during the NCHLT text resource development project.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe annotated data sets were developed by the Centre for Text Technology (CTexT, North-West University, South Africa).\n\nSee: more information", "### Licensing Information\n\nThe data is under the Creative Commons Attribution 2.5 South Africa License", "### Contributions\n\nThanks to @yvonnegitau for adding this dataset." ]
[ 95, 8, 120, 32, 83, 10, 11, 6, 89, 141, 11, 5, 21, 4, 27, 25, 5, 5, 25, 8, 8, 7, 8, 7, 5, 38, 18, 18 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Afrikaans #license-other #region-us \n# Dataset Card for Afrikaans Ner Corpus## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Afrikaans Ner Corpus Homepage\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: Martin Puttkammer### Dataset Summary\nThe Afrikaans Ner Corpus is an Afrikaans dataset developed by The Centre for Text Technology (CTexT), North-West University, South Africa. The data is based on documents from the South African goverment domain and crawled from URL websites. It was created to support NER task for Afrikaans language. The dataset uses CoNLL shared task annotation standards.### Supported Tasks and Leaderboards### Languages\n\nThe language supported is Afrikaans.## Dataset Structure### Data Instances\n\nA data point consists of sentences seperated by empty line and tab-seperated tokens and tags. \n{'id': '0',\n 'ner_tags': [0, 0, 0, 0, 0],\n 'tokens': ['Vertaling', 'van', 'die', 'inligting', 'in']\n}" ]
68a83b6cd4730be5e0ecbdbee941eef8f13aa867
# Dataset Card for "ag_news" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html](http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 31.33 MB - **Size of the generated dataset:** 31.70 MB - **Total amount of disk used:** 63.02 MB ### Dataset Summary AG is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July, 2004. The dataset is provided by the academic comunity for research purposes in data mining (clustering, classification, etc), information retrieval (ranking, search, etc), xml, data compression, data streaming, and any other non-commercial activity. For more information, please refer to the link http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html . The AG's news topic classification dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu) from the dataset above. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 31.33 MB - **Size of the generated dataset:** 31.70 MB - **Total amount of disk used:** 63.02 MB An example of 'train' looks as follows. ``` { "label": 3, "text": "New iPad released Just like every other September, this one is no different. Apple is planning to release a bigger, heavier, fatter iPad that..." 
} ``` ### Data Fields The data fields are the same among all splits. #### default - `text`: a `string` feature. - `label`: a classification label, with possible values including `World` (0), `Sports` (1), `Business` (2), `Sci/Tech` (3). ### Data Splits | name |train |test| |-------|-----:|---:| |default|120000|7600| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{Zhang2015CharacterlevelCN, title={Character-level Convolutional Networks for Text Classification}, author={Xiang Zhang and Junbo Jake Zhao and Yann LeCun}, booktitle={NIPS}, year={2015} } ``` ### Contributions Thanks to [@jxmorris12](https://github.com/jxmorris12), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@lewtun](https://github.com/lewtun) for adding this dataset.
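As a quick usage sketch (assuming the `datasets` library and the Hub id `ag_news` under which this card is published), the split sizes and label names above can be checked directly:

```python
from datasets import load_dataset

ds = load_dataset("ag_news")  # splits per the table above: train=120000, test=7600

# ClassLabel features carry the label names, in the order given under Data Fields.
label_names = ds["train"].features["label"].names  # ['World', 'Sports', 'Business', 'Sci/Tech']
example = ds["train"][0]
print(label_names[example["label"]], "->", example["text"][:80])
```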
Dataset Card for "ag\_news" =========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 31.33 MB * Size of the generated dataset: 31.70 MB * Total amount of disk used: 63.02 MB ### Dataset Summary AG is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine which has been running since July, 2004. The dataset is provided by the academic comunity for research purposes in data mining (clustering, classification, etc), information retrieval (ranking, search, etc), xml, data compression, data streaming, and any other non-commercial activity. For more information, please refer to the link URL . The AG's news topic classification dataset is constructed by Xiang Zhang (URL@URL) from the dataset above. It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015). ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 31.33 MB * Size of the generated dataset: 31.70 MB * Total amount of disk used: 63.02 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'text': a 'string' feature. * 'label': a classification label, with possible values including 'World' (0), 'Sports' (1), 'Business' (2), 'Sci/Tech' (3). ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @jxmorris12, @thomwolf, @lhoestq, @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nAG is a collection of more than 1 million news articles. News articles have been\ngathered from more than 2000 news sources by ComeToMyHead in more than 1 year of\nactivity. ComeToMyHead is an academic news search engine which has been running\nsince July, 2004. The dataset is provided by the academic comunity for research\npurposes in data mining (clustering, classification, etc), information retrieval\n(ranking, search, etc), xml, data compression, data streaming, and any other\nnon-commercial activity. For more information, please refer to the link\nURL .\n\n\nThe AG's news topic classification dataset is constructed by Xiang Zhang\n(URL@URL) from the dataset above. It is used as a text\nclassification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann\nLeCun. Character-level Convolutional Networks for Text Classification. Advances\nin Neural Information Processing Systems 28 (NIPS 2015).", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 31.33 MB\n* Size of the generated dataset: 31.70 MB\n* Total amount of disk used: 63.02 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'World' (0), 'Sports' (1), 'Business' (2), 'Sci/Tech' (3).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @jxmorris12, @thomwolf, @lhoestq, @lewtun for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-topic-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-unknown #region-us \n", "### Dataset Summary\n\n\nAG is a collection of more than 1 million news articles. News articles have been\ngathered from more than 2000 news sources by ComeToMyHead in more than 1 year of\nactivity. ComeToMyHead is an academic news search engine which has been running\nsince July, 2004. The dataset is provided by the academic comunity for research\npurposes in data mining (clustering, classification, etc), information retrieval\n(ranking, search, etc), xml, data compression, data streaming, and any other\nnon-commercial activity. For more information, please refer to the link\nURL .\n\n\nThe AG's news topic classification dataset is constructed by Xiang Zhang\n(URL@URL) from the dataset above. It is used as a text\nclassification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann\nLeCun. Character-level Convolutional Networks for Text Classification. Advances\nin Neural Information Processing Systems 28 (NIPS 2015).", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 31.33 MB\n* Size of the generated dataset: 31.70 MB\n* Total amount of disk used: 63.02 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'World' (0), 'Sports' (1), 'Business' (2), 'Sci/Tech' (3).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @jxmorris12, @thomwolf, @lhoestq, @lewtun for adding this dataset." ]
[ 84, 226, 10, 11, 6, 50, 17, 52, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 33 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-topic-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-unknown #region-us \n### Dataset Summary\n\n\nAG is a collection of more than 1 million news articles. News articles have been\ngathered from more than 2000 news sources by ComeToMyHead in more than 1 year of\nactivity. ComeToMyHead is an academic news search engine which has been running\nsince July, 2004. The dataset is provided by the academic comunity for research\npurposes in data mining (clustering, classification, etc), information retrieval\n(ranking, search, etc), xml, data compression, data streaming, and any other\nnon-commercial activity. For more information, please refer to the link\nURL .\n\n\nThe AG's news topic classification dataset is constructed by Xiang Zhang\n(URL@URL) from the dataset above. It is used as a text\nclassification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann\nLeCun. Character-level Convolutional Networks for Text Classification. Advances\nin Neural Information Processing Systems 28 (NIPS 2015).### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 31.33 MB\n* Size of the generated dataset: 31.70 MB\n* Total amount of disk used: 63.02 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'World' (0), 'Sports' (1), 'Business' (2), 'Sci/Tech' (3).### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations" ]
210d026faf9955653af8916fad021475a3f00453
# Dataset Card for "ai2_arc" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://allenai.org/data/arc](https://allenai.org/data/arc) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge](https://arxiv.org/abs/1803.05457) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1361.68 MB - **Size of the generated dataset:** 2.28 MB - **Total amount of disk used:** 1363.96 MB ### Dataset Summary A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### ARC-Challenge - **Size of downloaded dataset files:** 680.84 MB - **Size of the generated dataset:** 0.83 MB - **Total amount of disk used:** 681.67 MB An example of 'train' looks as follows. ``` { "answerKey": "B", "choices": { "label": ["A", "B", "C", "D"], "text": ["Shady areas increased.", "Food sources increased.", "Oxygen levels increased.", "Available water increased."] }, "id": "Mercury_SC_405487", "question": "One year, the oak trees in a park began producing more acorns than usual. The next year, the population of chipmunks in the park also increased. Which best explains why there were more chipmunks the next year?" } ``` #### ARC-Easy - **Size of downloaded dataset files:** 680.84 MB - **Size of the generated dataset:** 1.45 MB - **Total amount of disk used:** 682.29 MB An example of 'train' looks as follows. 
``` { "answerKey": "B", "choices": { "label": ["A", "B", "C", "D"], "text": ["Shady areas increased.", "Food sources increased.", "Oxygen levels increased.", "Available water increased."] }, "id": "Mercury_SC_405487", "question": "One year, the oak trees in a park began producing more acorns than usual. The next year, the population of chipmunks in the park also increased. Which best explains why there were more chipmunks the next year?" } ``` ### Data Fields The data fields are the same among all splits. #### ARC-Challenge - `id`: a `string` feature. - `question`: a `string` feature. - `choices`: a dictionary feature containing: - `text`: a `string` feature. - `label`: a `string` feature. - `answerKey`: a `string` feature. #### ARC-Easy - `id`: a `string` feature. - `question`: a `string` feature. - `choices`: a dictionary feature containing: - `text`: a `string` feature. - `label`: a `string` feature. - `answerKey`: a `string` feature. ### Data Splits | name |train|validation|test| |-------------|----:|---------:|---:| |ARC-Challenge| 1119| 299|1172| |ARC-Easy | 2251| 570|2376| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @article{allenai:arc, author = {Peter Clark and Isaac Cowhey and Oren Etzioni and Tushar Khot and Ashish Sabharwal and Carissa Schoenick and Oyvind Tafjord}, title = {Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge}, journal = {arXiv:1803.05457v1}, year = {2018}, } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
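A minimal loading sketch for one of the two configs described above (assuming the `datasets` library; the Hub id `allenai/ai2_arc` and the config names `ARC-Challenge` / `ARC-Easy` come from this card):

```python
from datasets import load_dataset

arc = load_dataset("allenai/ai2_arc", "ARC-Challenge")

q = arc["train"][0]
print(q["question"])
# For a single example, `choices` is a dict of parallel lists, as in the instance shown above.
for label, text in zip(q["choices"]["label"], q["choices"]["text"]):
    marker = "*" if label == q["answerKey"] else " "
    print(f"{marker} {label}. {text}")
```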
Dataset Card for "ai2\_arc" =========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge * Point of Contact: * Size of downloaded dataset files: 1361.68 MB * Size of the generated dataset: 2.28 MB * Total amount of disk used: 1363.96 MB ### Dataset Summary A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### ARC-Challenge * Size of downloaded dataset files: 680.84 MB * Size of the generated dataset: 0.83 MB * Total amount of disk used: 681.67 MB An example of 'train' looks as follows. #### ARC-Easy * Size of downloaded dataset files: 680.84 MB * Size of the generated dataset: 1.45 MB * Total amount of disk used: 682.29 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### ARC-Challenge * 'id': a 'string' feature. * 'question': a 'string' feature. * 'choices': a dictionary feature containing: + 'text': a 'string' feature. + 'label': a 'string' feature. * 'answerKey': a 'string' feature. #### ARC-Easy * 'id': a 'string' feature. * 'question': a 'string' feature. * 'choices': a dictionary feature containing: + 'text': a 'string' feature. + 'label': a 'string' feature. * 'answerKey': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @lewtun, @patrickvonplaten, @thomwolf for adding this dataset.
[ "### Dataset Summary\n\n\nA new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in\nadvanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains\nonly questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also\nincluding a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### ARC-Challenge\n\n\n* Size of downloaded dataset files: 680.84 MB\n* Size of the generated dataset: 0.83 MB\n* Total amount of disk used: 681.67 MB\n\n\nAn example of 'train' looks as follows.", "#### ARC-Easy\n\n\n* Size of downloaded dataset files: 680.84 MB\n* Size of the generated dataset: 1.45 MB\n* Total amount of disk used: 682.29 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### ARC-Challenge\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'choices': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'label': a 'string' feature.\n* 'answerKey': a 'string' feature.", "#### ARC-Easy\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'choices': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'label': a 'string' feature.\n* 'answerKey': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @lewtun, @patrickvonplaten, @thomwolf for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #task_ids-multiple-choice-qa #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-sa-4.0 #arxiv-1803.05457 #region-us \n", "### Dataset Summary\n\n\nA new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in\nadvanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains\nonly questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also\nincluding a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### ARC-Challenge\n\n\n* Size of downloaded dataset files: 680.84 MB\n* Size of the generated dataset: 0.83 MB\n* Total amount of disk used: 681.67 MB\n\n\nAn example of 'train' looks as follows.", "#### ARC-Easy\n\n\n* Size of downloaded dataset files: 680.84 MB\n* Size of the generated dataset: 1.45 MB\n* Total amount of disk used: 682.29 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### ARC-Challenge\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'choices': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'label': a 'string' feature.\n* 'answerKey': a 'string' feature.", "#### ARC-Easy\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'choices': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'label': a 'string' feature.\n* 'answerKey': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @lewtun, @patrickvonplaten, @thomwolf for adding this dataset." ]
[ 113, 130, 10, 11, 6, 58, 58, 17, 81, 81, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 28 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #task_ids-multiple-choice-qa #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-sa-4.0 #arxiv-1803.05457 #region-us \n### Dataset Summary\n\n\nA new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in\nadvanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains\nonly questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also\nincluding a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### ARC-Challenge\n\n\n* Size of downloaded dataset files: 680.84 MB\n* Size of the generated dataset: 0.83 MB\n* Total amount of disk used: 681.67 MB\n\n\nAn example of 'train' looks as follows.#### ARC-Easy\n\n\n* Size of downloaded dataset files: 680.84 MB\n* Size of the generated dataset: 1.45 MB\n* Total amount of disk used: 682.29 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### ARC-Challenge\n\n\n* 'id': a 'string' feature.\n* 'question': a 'string' feature.\n* 'choices': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'label': a 'string' feature.\n* 'answerKey': a 'string' feature." ]
69a8c7b33b9ae3281d93bdc34e85735b2ad4e662
# Dataset Card for air_dialogue

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://worksheets.codalab.org/worksheets/0xa79833f4b3c24f4188cee7131b120a59
- **Repository:** https://github.com/google/airdialogue
- **Paper:** https://www.aclweb.org/anthology/D18-1419/
- **Leaderboard:** https://worksheets.codalab.org/worksheets/0xa79833f4b3c24f4188cee7131b120a59
- **Point of Contact:** [AirDialogue-Google](mailto:airdialogue@gmail.com) [Aakash Gupta](mailto:aakashg80@gmail.com)

### Dataset Summary

AirDialogue is a large dataset that contains 402,038 goal-oriented conversations. To collect this dataset, we create a context generator which provides travel and flight restrictions. The human annotators are then asked to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions.

### Supported Tasks and Leaderboards

We use perplexity and BLEU score to evaluate the quality of the language generated by the model. We also compare the dialogue state *s* generated by the model against the ground-truth state *s0*. Two categories of metrics are used: exact match scores and scaled scores.

The inference competition & leaderboard can be found here: https://worksheets.codalab.org/worksheets/0xa79833f4b3c24f4188cee7131b120a59

### Languages

The text in the dataset is in English. The BCP 47 code is `en`.

## Dataset Structure

### Data Instances

The data is provided in two sets of files: one has the dialogues (`air_dialogue_data`) and the other the knowledge base (`air_dialogue_kb`).

BuilderConfig: `air_dialogue_data`

```
{"action": {"status": "book", "name": "Emily Edwards", "flight": [1027]}, "intent": {"return_month": "June", "return_day": "14", "max_price": 200, "departure_airport": "DFW", "return_time": "afternoon", "max_connections": 1, "departure_day": "12", "goal": "book", "departure_month": "June", "name": "Emily Edwards", "return_airport": "IAD"}, "timestamps": [1519233239, 1519233244, 1519233249, 1519233252, 1519233333, 1519233374, 1519233392, 1519233416, 1519233443, 1519233448, 1519233464, 1519233513, 1519233525, 1519233540, 1519233626, 1519233628, 1519233638], "dialogue": ["customer: Hello.", "agent: Hello.", "customer: My name is Emily Edwards.", "agent: How may I help you out?", "customer: I need some help in my flight ticket reservation to attend a convocation meeting, can you please help me?", "agent: Sure, I will help you out. May I know your travelling dates please?", "customer: Thank you and my dates are 06/12 and back on 06/14.", "agent: Can I know your airport codes?", "customer: The airport codes are from DFW to IAD.", "agent: Ok, please wait a moment.", "customer: Sure.", "agent: There is a flight with connection 1 and price 200, can I proceed with this flight?", "customer: Yes, do proceed with booking.", "agent: Ok, your ticket has been booked.", "customer: Thank you for your assistance in my flight ticket reservation.", "agent: Thank you for choosing us.", "customer: You are welcome."], "expected_action": {"status": "book", "name": "Emily Edwards", "flight": [1027]}, "correct_sample": true}
```

BuilderConfig: `air_dialogue_kb`

```
{"kb": [{"return_airport": "DTW", "airline": "Spirit", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1000, "departure_month": "June", "departure_time_num": 17, "class": "economy", "return_time_num": 2, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DTW", "airline": "Frontier", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1001, "departure_month": "June", "departure_time_num": 0, "class": "business", "return_time_num": 15, "return_month": "June", "return_day": "13", "num_connections": 0, "price": 500}, {"return_airport": "DTW", "airline": "JetBlue", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1002, "departure_month": "June", "departure_time_num": 0, "class": "business", "return_time_num": 13, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 600}, {"return_airport": "IAD", "airline": "Hawaiian", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1003, "departure_month": "June", "departure_time_num": 6, "class": "economy", "return_time_num": 5, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DFW", "airline": "AA", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1004, "departure_month": "June", "departure_time_num": 9, "class": "economy", "return_time_num": 11, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "IAD", "airline": "AA", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1005, "departure_month": "June", "departure_time_num": 3, "class": "economy", "return_time_num": 17, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 100}, {"return_airport": "DTW", "airline": "Frontier", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1006, "departure_month": "June", "departure_time_num": 10, "class": "economy", "return_time_num": 10, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "IAD", "airline": "UA", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1007, "departure_month": "June", "departure_time_num": 14, "class": "economy", "return_time_num": 20, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 100}, {"return_airport": "DFW", "airline": "AA", "departure_day": "13", "departure_airport": "DTW", "flight_number": 1008, "departure_month": "June", "departure_time_num": 6, "class": "economy", "return_time_num": 8, "return_month": "June", "return_day": "14", "num_connections": 2, "price": 400}, {"return_airport": "DFW", "airline": "Delta", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1009, "departure_month": "June",
"departure_time_num": 18, "class": "economy", "return_time_num": 6, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DFW", "airline": "Frontier", "departure_day": "13", "departure_airport": "DTW", "flight_number": 1010, "departure_month": "June", "departure_time_num": 4, "class": "economy", "return_time_num": 2, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "DFW", "airline": "Southwest", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1011, "departure_month": "June", "departure_time_num": 17, "class": "economy", "return_time_num": 22, "return_month": "June", "return_day": "13", "num_connections": 0, "price": 100}, {"return_airport": "DTW", "airline": "JetBlue", "departure_day": "11", "departure_airport": "DFW", "flight_number": 1012, "departure_month": "June", "departure_time_num": 13, "class": "economy", "return_time_num": 22, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 100}, {"return_airport": "DTW", "airline": "Southwest", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1013, "departure_month": "June", "departure_time_num": 16, "class": "economy", "return_time_num": 13, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DTW", "airline": "Delta", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1014, "departure_month": "June", "departure_time_num": 0, "class": "economy", "return_time_num": 8, "return_month": "June", "return_day": "15", "num_connections": 1, "price": 100}, {"return_airport": "DTW", "airline": "Southwest", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1015, "departure_month": "June", "departure_time_num": 17, "class": "economy", "return_time_num": 1, "return_month": "June", "return_day": "15", "num_connections": 1, "price": 300}, {"return_airport": "DTW", "airline": "UA", "departure_day": "11", "departure_airport": "DFW", "flight_number": 1016, "departure_month": "June", "departure_time_num": 10, "class": "economy", "return_time_num": 4, "return_month": "June", "return_day": "14", "num_connections": 0, "price": 200}, {"return_airport": "DFW", "airline": "AA", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1017, "departure_month": "June", "departure_time_num": 14, "class": "economy", "return_time_num": 23, "return_month": "June", "return_day": "14", "num_connections": 2, "price": 400}, {"return_airport": "DTW", "airline": "JetBlue", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1018, "departure_month": "June", "departure_time_num": 3, "class": "economy", "return_time_num": 1, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "DFW", "airline": "Hawaiian", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1019, "departure_month": "June", "departure_time_num": 7, "class": "economy", "return_time_num": 18, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "DFW", "airline": "Delta", "departure_day": "12", "departure_airport": "IAD", "flight_number": 1020, "departure_month": "June", "departure_time_num": 6, "class": "economy", "return_time_num": 18, "return_month": "June", "return_day": "14", "num_connections": 2, "price": 200}, {"return_airport": "IAD", "airline": "Delta", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1021, 
"departure_month": "June", "departure_time_num": 11, "class": "business", "return_time_num": 8, "return_month": "June", "return_day": "14", "num_connections": 0, "price": 1000}, {"return_airport": "IAD", "airline": "JetBlue", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1022, "departure_month": "June", "departure_time_num": 4, "class": "economy", "return_time_num": 14, "return_month": "June", "return_day": "13", "num_connections": 0, "price": 200}, {"return_airport": "IAD", "airline": "Frontier", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1023, "departure_month": "June", "departure_time_num": 19, "class": "economy", "return_time_num": 23, "return_month": "June", "return_day": "13", "num_connections": 1, "price": 200}, {"return_airport": "DFW", "airline": "UA", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1024, "departure_month": "June", "departure_time_num": 11, "class": "economy", "return_time_num": 19, "return_month": "June", "return_day": "15", "num_connections": 1, "price": 200}, {"return_airport": "DTW", "airline": "Hawaiian", "departure_day": "11", "departure_airport": "IAD", "flight_number": 1025, "departure_month": "June", "departure_time_num": 6, "class": "economy", "return_time_num": 10, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "DTW", "airline": "UA", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1026, "departure_month": "June", "departure_time_num": 0, "class": "economy", "return_time_num": 18, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 300}, {"return_airport": "IAD", "airline": "Delta", "departure_day": "12", "departure_airport": "DFW", "flight_number": 1027, "departure_month": "June", "departure_time_num": 17, "class": "economy", "return_time_num": 15, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 200}, {"return_airport": "IAD", "airline": "Southwest", "departure_day": "12", "departure_airport": "DTW", "flight_number": 1028, "departure_month": "June", "departure_time_num": 23, "class": "economy", "return_time_num": 13, "return_month": "June", "return_day": "14", "num_connections": 1, "price": 100}, {"return_airport": "DFW", "airline": "Spirit", "departure_day": "11", "departure_airport": "DTW", "flight_number": 1029, "departure_month": "June", "departure_time_num": 22, "class": "business", "return_time_num": 4, "return_month": "June", "return_day": "14", "num_connections": 0, "price": 800}], "reservation": 0} ``` ### Data Fields BuilderConfig: `air_dialogue_data`: Provides for customer context, dialogue states and environment key name | Description | |---|---| |'search_action' | search action performed by customer | |'action' | Action taken by the agent | |'intent' | Intents from the conversation | |'timestamps' | Timestamp for each of the dialogues | |'dialogue' | Dialogue recorded between agent & customer | |'expected_action' | Expected action from agent (human-annotated)| |'correct_sample' | whether action performed by agent was same as expected_action | BuilderConfig: `air_dialogue_kb`: Provides for the Agent Context _ca_ = (_db_, _r_ ) key name | Description | |---|---| |'kb' | Available flights in the database | |'reservation' | whether customer has an existing reservation| ### Data Splits Data is split into Train/Dev & Test in the ration of 80%, 10% and 10% ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data 
[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

To collect this dataset, we create a context-generator which provides travel and flight restrictions. We then ask human annotators to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. Key to our environment is the ease of evaluating the success of the dialogue, which is achieved by using ground-truth states (e.g., the flight being booked) generated by the restrictions. Any dialogue agent that does not generate the correct states is considered to fail.

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

No personal or sensitive information is stored.

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[AirDialogue team](mailto:airdialogue@gmail.com)

For issues regarding the Hugging Face Dataset Hub implementation: [Aakash Gupta](mailto:aakashg80@gmail.com)

### Licensing Information

cc-by-nc-4.0

### Citation Information

@inproceedings{wei-etal-2018-airdialogue,
    title = "{A}ir{D}ialogue: An Environment for Goal-Oriented Dialogue Research",
    author = "Wei, Wei and Le, Quoc and Dai, Andrew and Li, Jia",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    month = oct # "-" # nov,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D18-1419",
    doi = "10.18653/v1/D18-1419",
    pages = "3844--3854",
    abstract = "Recent progress in dialogue generation has inspired a number of studies on dialogue systems that are capable of accomplishing tasks through natural language interactions. A promising direction among these studies is the use of reinforcement learning techniques, such as self-play, for training dialogue agents. However, current datasets are limited in size, and the environment for training agents and evaluating progress is relatively unsophisticated. We present AirDialogue, a large dataset that contains 301,427 goal-oriented conversations. To collect this dataset, we create a context-generator which provides travel and flight restrictions. We then ask human annotators to play the role of a customer or an agent and interact with the goal of successfully booking a trip given the restrictions. Key to our environment is the ease of evaluating the success of the dialogue, which is achieved by using ground-truth states (e.g., the flight being booked) generated by the restrictions. Any dialogue agent that does not generate the correct states is considered to fail. Our experimental results indicate that state-of-the-art dialogue models can only achieve a score of 0.17 while humans can reach a score of 0.91, which suggests significant opportunities for future improvement.",
}

### Contributions

Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset.
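As a usage illustration (not part of the original card), the two builder configs can be loaded side by side with the `datasets` library. This is a minimal sketch assuming the Hub id `air_dialogue` and the config names listed above; it also treats the two configs as row-aligned, which their identical split sizes suggest but the card does not state explicitly.

```python
from datasets import load_dataset

# Dialogues: customer intent, agent action, and the recorded conversation.
dialogues = load_dataset("air_dialogue", "air_dialogue_data", split="train")

# Knowledge base: the flight table and reservation flag visible to the agent.
kb = load_dataset("air_dialogue", "air_dialogue_kb", split="train")

example = dialogues[0]
print(example["intent"]["goal"])   # customer goal, e.g. "book"
print(example["dialogue"][:2])     # first two turns of the conversation

# Assuming row alignment, pick the cheapest flight the agent could offer.
flights = kb[0]["kb"]
cheapest = min(flights, key=lambda f: f["price"])
print(cheapest["flight_number"], cheapest["price"])
```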
air_dialogue
[ "task_categories:conversational", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-generation", "task_ids:dialogue-modeling", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-nc-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["conversational", "text-generation", "fill-mask"], "task_ids": ["dialogue-generation", "dialogue-modeling", "language-modeling", "masked-language-modeling"], "pretty_name": "AirDialogue", "dataset_info": [{"config_name": "air_dialogue_data", "features": [{"name": "action", "struct": [{"name": "status", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "flight", "sequence": "int32"}]}, {"name": "intent", "struct": [{"name": "return_month", "dtype": "string"}, {"name": "return_day", "dtype": "string"}, {"name": "max_price", "dtype": "int32"}, {"name": "departure_airport", "dtype": "string"}, {"name": "max_connections", "dtype": "int32"}, {"name": "departure_day", "dtype": "string"}, {"name": "goal", "dtype": "string"}, {"name": "departure_month", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "return_airport", "dtype": "string"}]}, {"name": "timestamps", "sequence": "int64"}, {"name": "dialogue", "sequence": "string"}, {"name": "expected_action", "struct": [{"name": "status", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "flight", "sequence": "int32"}]}, {"name": "search_info", "list": [{"name": "button_name", "dtype": "string"}, {"name": "field_name", "dtype": "string"}, {"name": "field_value", "dtype": "string"}, {"name": "timestmamp", "dtype": "int64"}]}, {"name": "correct_sample", "dtype": "bool_"}], "splits": [{"name": "train", "num_bytes": 353721137, "num_examples": 321459}, {"name": "validation", "num_bytes": 44442238, "num_examples": 40363}], "download_size": 272898923, "dataset_size": 398163375}, {"config_name": "air_dialogue_kb", "features": [{"name": "kb", "list": [{"name": "airline", "dtype": "string"}, {"name": "class", "dtype": "string"}, {"name": "departure_airport", "dtype": "string"}, {"name": "departure_day", "dtype": "string"}, {"name": "departure_month", "dtype": "string"}, {"name": "departure_time_num", "dtype": "int32"}, {"name": "flight_number", "dtype": "int32"}, {"name": "num_connections", "dtype": "int32"}, {"name": "price", "dtype": "int32"}, {"name": "return_airport", "dtype": "string"}, {"name": "return_day", "dtype": "string"}, {"name": "return_month", "dtype": "string"}, {"name": "return_time_num", "dtype": "int32"}]}, {"name": "reservation", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 782592158, "num_examples": 321459}, {"name": "validation", "num_bytes": 98269789, "num_examples": 40363}], "download_size": 272898923, "dataset_size": 880861947}]}
2024-01-18T10:59:29+00:00
[]
[ "en" ]
af3f2fa5462ac461b696cb300d66e07ad366057f
# Dataset Card for Arabic Jordanian General Tweets

## Table of Contents

- [Dataset Card for Arabic Jordanian General Tweets](#dataset-card-for-arabic-jordanian-general-tweets)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Repository:** [Arabic Jordanian General Tweets](https://github.com/komari6/Arabic-twitter-corpus-AJGT)
- **Paper:** [Arabic Tweets Sentimental Analysis Using Machine Learning](https://link.springer.com/chapter/10.1007/978-3-319-60042-0_66)
- **Point of Contact:** [Khaled Alomari](mailto:khaled.alomari@adu.ac.ae)

### Dataset Summary

The Arabic Jordanian General Tweets (AJGT) Corpus consists of 1,800 tweets, written in Modern Standard Arabic (MSA) or Jordanian dialect, annotated as positive or negative.

### Supported Tasks and Leaderboards

The dataset was published in this [paper](https://link.springer.com/chapter/10.1007/978-3-319-60042-0_66).

### Languages

The dataset is in Arabic.

## Dataset Structure

### Data Instances

A binary dataset with negative and positive sentiment labels.

### Data Fields

- `text` (str): Tweet text.
- `label` (int): Sentiment.

### Data Splits

The dataset is not split.

|          | train |
|----------|------:|
| no split | 1,800 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

Contains 1,800 tweets collected from Twitter.

#### Who are the source language producers?

Twitter users.

### Annotations

The dataset does not contain any additional annotations.

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{alomari2017arabic,
  title={Arabic tweets sentimental analysis using machine learning},
  author={Alomari, Khaled Mohammad and ElSherif, Hatem M and Shaalan, Khaled},
  booktitle={International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems},
  pages={602--610},
  year={2017},
  organization={Springer}
}
```

### Contributions

Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
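A brief, hedged usage sketch (not part of the original card): loading the corpus with the `datasets` library, assuming the Hub id `ajgt_twitter_ar`. Since no official split exists, the held-out split below is one you create yourself.

```python
from datasets import load_dataset

# AJGT ships as a single unsplit "train" set of 1,800 tweets.
ds = load_dataset("ajgt_twitter_ar", split="train")

label_names = ds.features["label"].names  # ["Negative", "Positive"]
tweet = ds[0]
print(tweet["text"], "->", label_names[tweet["label"]])

# No official split exists, so carve out a held-out set yourself.
splits = ds.train_test_split(test_size=0.2, seed=42)
print(len(splits["train"]), len(splits["test"]))  # 1440 360
```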
ajgt_twitter_ar
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ar"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Arabic Jordanian General Tweets", "dataset_info": {"config_name": "plain_text", "features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Negative", "1": "Positive"}}}}], "splits": [{"name": "train", "num_bytes": 175420, "num_examples": 1800}], "download_size": 91857, "dataset_size": 175420}, "configs": [{"config_name": "plain_text", "data_files": [{"split": "train", "path": "plain_text/train-*"}], "default": true}]}
2024-01-09T11:58:01+00:00
[]
[ "ar" ]
71593d1379934286885c53d147bc863ffe830745
# Dataset Card for Allegro Reviews

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://klejbenchmark.com/
- **Repository:** https://github.com/allegro/klejbenchmark-allegroreviews
- **Paper:** KLEJ: Comprehensive Benchmark for Polish Language Understanding (Rybak, Piotr and Mroczkowski, Robert and Tracz, Janusz and Gawlik, Ireneusz)
- **Leaderboard:** https://klejbenchmark.com/leaderboard/
- **Point of Contact:** klejbenchmark@allegro.pl

### Dataset Summary

Allegro Reviews is a sentiment analysis dataset consisting of 11,588 product reviews written in Polish and extracted from Allegro.pl, a popular e-commerce marketplace. Each review contains at least 50 words and has a rating on a scale from one (negative review) to five (positive review).

We recommend using the provided train/dev/test split. The ratings for the test set reviews are kept hidden. You can evaluate your model using the online evaluation tool available on klejbenchmark.com.

### Supported Tasks and Leaderboards

Product review sentiment analysis. Leaderboard: https://klejbenchmark.com/leaderboard/

### Languages

Polish

## Dataset Structure

### Data Instances

Two TSV files (train, dev) with two columns (text, rating) and one (test) with a single column (text).

### Data Fields

- text: a product review of at least 50 words
- rating: product rating on a scale of one (negative review) to five (positive review)

### Data Splits

Data is split into train/dev/test.

## Dataset Creation

### Curation Rationale

This dataset is one of nine evaluation tasks designed to improve Polish language processing.

### Source Data

#### Initial Data Collection and Normalization

Allegro Reviews is a set of product reviews from a popular e-commerce marketplace (Allegro.pl).

#### Who are the source language producers?

Customers of an e-commerce marketplace.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Allegro Machine Learning Research team (klejbenchmark@allegro.pl)

### Licensing Information

Dataset licensed under CC BY-SA 4.0

### Citation Information

@inproceedings{rybak-etal-2020-klej,
    title = "{KLEJ}: Comprehensive Benchmark for Polish Language Understanding",
    author = "Rybak, Piotr and Mroczkowski, Robert and Tracz, Janusz and Gawlik, Ireneusz",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.111",
    pages = "1191--1201",
}

### Contributions

Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
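A minimal loading sketch (not part of the original card), assuming the Hub id `allegro_reviews`. The 4-star threshold below is an illustrative way to coarsen ratings into binary sentiment; it is not part of the official KLEJ task, which scores the 1-5 rating directly.

```python
from datasets import load_dataset

ds = load_dataset("allegro_reviews")
print(ds)  # train / validation / test

# Illustrative only: coarsen the 1-5 star rating into binary sentiment.
def to_sentiment(example):
    example["sentiment"] = int(example["rating"] >= 4.0)
    return example

train = ds["train"].map(to_sentiment)
print(train[0]["rating"], train[0]["sentiment"])
```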
allegro_reviews
[ "task_categories:text-classification", "task_ids:sentiment-scoring", "task_ids:text-scoring", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:cc-by-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["pl"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-scoring", "text-scoring"], "paperswithcode_id": "allegro-reviews", "pretty_name": "Allegro Reviews", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "rating", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 4899535, "num_examples": 9577}, {"name": "test", "num_bytes": 514523, "num_examples": 1006}, {"name": "validation", "num_bytes": 515781, "num_examples": 1002}], "download_size": 3923657, "dataset_size": 5929839}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-09T11:59:39+00:00
[]
[ "pl" ]
a4654f4896408912913a62ace89614879a549287
# Dataset Card for Allociné

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** [Allociné dataset repository](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert/tree/master/allocine_dataset)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Théophile Blard](mailto:theophile.blard@gmail.com)

### Dataset Summary

The Allociné dataset is a French-language dataset for sentiment analysis. The texts are movie reviews written between 2006 and 2020 by members of the [Allociné.fr](https://www.allocine.fr/) community for various films. It contains 100k positive and 100k negative reviews divided into train (160k), validation (20k), and test (20k) sets.

### Supported Tasks and Leaderboards

- `text-classification`, `sentiment-classification`: The dataset can be used to train a model for sentiment classification. The model performance is evaluated based on the accuracy of the predicted labels as compared to the given labels in the dataset. A BERT-based model, [tf-allociné](https://huggingface.co/tblard/tf-allocine), achieves 97.44% accuracy on the test set.

### Languages

The text is in French, as spoken by users of the [Allociné.fr](https://www.allocine.fr/) website. The BCP-47 code for French is fr.

## Dataset Structure

### Data Instances

Each data instance contains the following features: _review_ and _label_. In the Hugging Face distribution of the dataset, the _label_ has 2 possible values, _0_ and _1_, which correspond to _negative_ and _positive_ respectively. See the [Allociné corpus viewer](https://huggingface.co/datasets/viewer/?dataset=allocine) to explore more examples.

An example from the Allociné train set looks like the following:

```
{'review': "Premier film de la saga Kozure Okami, \"Le Sabre de la vengeance\" est un très bon film qui mêle drame et action, et qui, en 40 ans, n'a pas pris une ride.", 'label': 1}
```

### Data Fields

- 'review': a string containing the review text
- 'label': an integer, either _0_ or _1_, indicating a _negative_ or _positive_ review, respectively

### Data Splits

The Allociné dataset has 3 splits: _train_, _validation_, and _test_. The splits contain disjoint sets of movies. The following table contains the number of reviews in each split and the percentage of positive and negative reviews.
| Dataset Split | Number of Instances in Split | Percent Negative Reviews | Percent Positive Reviews | | ------------- | ---------------------------- | ------------------------ | ------------------------ | | Train | 160,000 | 49.6% | 50.4% | | Validation | 20,000 | 51.0% | 49.0% | | Test | 20,000 | 52.0% | 48.0% | ## Dataset Creation ### Curation Rationale The Allociné dataset was developed to support large-scale sentiment analysis in French. It was released alongside the [tf-allociné](https://huggingface.co/tblard/tf-allocine) model and used to compare the performance of several language models on this task. ### Source Data #### Initial Data Collection and Normalization The reviews and ratings were collected using a list of [film page urls](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert/blob/master/allocine_dataset/allocine_films_urls.txt) and the [allocine_scraper.py](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert/blob/master/allocine_dataset/allocine_scraper.py) tool. Up to 30 reviews were collected for each film. The reviews were originally labeled with a rating from 0.5 to 5.0 with a step of 0.5 between each rating. Ratings less than or equal to 2 are labeled as negative and ratings greater than or equal to 4 are labeled as positive. Only reviews with fewer than 2000 characters are included in the dataset. #### Who are the source language producers? The dataset contains movie reviews produced by the online community of the [Allociné.fr](https://www.allocine.fr/) website. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Reviewer usernames or personal information were not collected with the reviews, but could potentially be recovered. The content of each review may include information and opinions about the film's actors, film crew, and plot. ## Considerations for Using the Data ### Social Impact of Dataset Sentiment classification is a complex task which requires sophisticated language understanding skills. Successful models can support decision-making based on the outcome of the sentiment analysis, though such models currently require a high degree of domain specificity. It should be noted that the community represented in the dataset may not represent any downstream application's potential users, and the observed behavior of a model trained on this dataset may vary based on the domain and use case. ### Discussion of Biases The Allociné website lists a number of topics which violate their [terms of service](https://www.allocine.fr/service/conditions.html#charte). Further analysis is needed to determine the extent to which moderators have successfully removed such content. ### Other Known Limitations The limitations of the Allociné dataset have not yet been investigated; however, [Staliūnaitė and Bonfil (2017)](https://www.aclweb.org/anthology/W17-5410.pdf) detail linguistic phenomena that are generally present in sentiment analysis but difficult for models to accurately label, such as negation, adverbial modifiers, and reviewer pragmatics. ## Additional Information ### Dataset Curators The Allociné dataset was collected by Théophile Blard. ### Licensing Information The Allociné dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT). 
### Citation Information > Théophile Blard, French sentiment analysis with BERT, (2020), GitHub repository, <https://github.com/TheophileBlard/french-sentiment-analysis-with-bert> ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@TheophileBlard](https://github.com/TheophileBlard), [@lewtun](https://github.com/lewtun) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
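To make the card's Data Instances and Data Fields sections concrete, here is a minimal loading sketch using the Hugging Face `datasets` library. The dataset identifier `allocine` and the split names come from this card's metadata; treat the snippet as an illustration rather than official tooling.

```python
from datasets import load_dataset

# Load the Allociné reviews; "allocine" is the configuration name
# declared in this card's metadata.
dataset = load_dataset("allocine")

# Labels are 0 (negative) or 1 (positive), as described under Data Fields.
example = dataset["train"][0]
print(example["review"][:100])
print("positive" if example["label"] == 1 else "negative")
```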
allocine
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:fr", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["fr"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "paperswithcode_id": "allocine", "pretty_name": "Allocin\u00e9", "dataset_info": {"config_name": "allocine", "features": [{"name": "review", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}], "splits": [{"name": "train", "num_bytes": 91330632, "num_examples": 160000}, {"name": "validation", "num_bytes": 11546242, "num_examples": 20000}, {"name": "test", "num_bytes": 11547689, "num_examples": 20000}], "download_size": 75125954, "dataset_size": 114424563}, "configs": [{"config_name": "allocine", "data_files": [{"split": "train", "path": "allocine/train-*"}, {"split": "validation", "path": "allocine/validation-*"}, {"split": "test", "path": "allocine/test-*"}], "default": true}], "train-eval-index": [{"config": "allocine", "task": "text-classification", "task_id": "multi_class_classification", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"review": "text", "label": "target"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 macro", "args": {"average": "macro"}}, {"type": "f1", "name": "F1 micro", "args": {"average": "micro"}}, {"type": "f1", "name": "F1 weighted", "args": {"average": "weighted"}}, {"type": "precision", "name": "Precision macro", "args": {"average": "macro"}}, {"type": "precision", "name": "Precision micro", "args": {"average": "micro"}}, {"type": "precision", "name": "Precision weighted", "args": {"average": "weighted"}}, {"type": "recall", "name": "Recall macro", "args": {"average": "macro"}}, {"type": "recall", "name": "Recall micro", "args": {"average": "micro"}}, {"type": "recall", "name": "Recall weighted", "args": {"average": "weighted"}}]}]}
2024-01-09T12:02:24+00:00
[]
[ "fr" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-French #license-mit #region-us
Dataset Card for Allociné ========================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: Allociné dataset repository * Paper: * Leaderboard: * Point of Contact: Théophile Blard ### Dataset Summary The Allociné dataset is a French-language dataset for sentiment analysis. The texts are movie reviews written between 2006 and 2020 by members of the Allociné.fr community for various films. It contains 100k positive and 100k negative reviews divided into train (160k), validation (20k), and test (20k). ### Supported Tasks and Leaderboards * 'text-classification', 'sentiment-classification': The dataset can be used to train a model for sentiment classification. The model performance is evaluated based on the accuracy of the predicted labels as compared to the given labels in the dataset. A BERT-based model, tf-allociné, achieves 97.44% accuracy on the test set. ### Languages The text is in French, as spoken by users of the Allociné.fr website. The BCP-47 code for French is fr. Dataset Structure ----------------- ### Data Instances Each data instance contains the following features: *review* and *label*. In the Hugging Face distribution of the dataset, the *label* has 2 possible values, *0* and *1*, which correspond to *negative* and *positive* respectively. See the Allociné corpus viewer to explore more examples. An example from the Allociné train set looks like the following: ### Data Fields * 'review': a string containing the review text * 'label': an integer, either *0* or *1*, indicating a *negative* or *positive* review, respectively ### Data Splits The Allociné dataset has 3 splits: *train*, *validation*, and *test*. The splits contain disjoint sets of movies. The following table contains the number of reviews in each split and the percentage of positive and negative reviews. Dataset Creation ---------------- ### Curation Rationale The Allociné dataset was developed to support large-scale sentiment analysis in French. It was released alongside the tf-allociné model and used to compare the performance of several language models on this task. ### Source Data #### Initial Data Collection and Normalization The reviews and ratings were collected using a list of film page urls and the allocine\_scraper.py tool. Up to 30 reviews were collected for each film. The reviews were originally labeled with a rating from 0.5 to 5.0 with a step of 0.5 between each rating. Ratings less than or equal to 2 are labeled as negative and ratings greater than or equal to 4 are labeled as positive. Only reviews with less than 2000 characters are included in the dataset. #### Who are the source language producers? The dataset contains movie reviews produced by the online community of the Allociné.fr website. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? 
[N/A] ### Personal and Sensitive Information Reviewer usernames or personal information were not collected with the reviews, but could potentially be recovered. The content of each review may include information and opinions about the film's actors, film crew, and plot. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset Sentiment classification is a complex task which requires sophisticated language understanding skills. Successful models can support decision-making based on the outcome of the sentiment analysis, though such models currently require a high degree of domain specificity. It should be noted that the community represented in the dataset may not represent any downstream application's potential users, and the observed behavior of a model trained on this dataset may vary based on the domain and use case. ### Discussion of Biases The Allociné website lists a number of topics which violate their terms of service. Further analysis is needed to determine the extent to which moderators have successfully removed such content. ### Other Known Limitations The limitations of the Allociné dataset have not yet been investigated, however Staliūnaitė and Bonfil (2017) detail linguistic phenomena that are generally present in sentiment analysis but difficult for models to accurately label, such as negation, adverbial modifiers, and reviewer pragmatics. Additional Information ---------------------- ### Dataset Curators The Allociné dataset was collected by Théophile Blard. ### Licensing Information The Allociné dataset is licensed under the MIT License. > > Théophile Blard, French sentiment analysis with BERT, (2020), GitHub repository, <URL > > > ### Contributions Thanks to @thomwolf, @TheophileBlard, @lewtun and @mcmillanmajora for adding this dataset.
[ "### Dataset Summary\n\n\nThe Allociné dataset is a French-language dataset for sentiment analysis. The texts are movie reviews written between 2006 and 2020 by members of the Allociné.fr community for various films. It contains 100k positive and 100k negative reviews divided into train (160k), validation (20k), and test (20k).", "### Supported Tasks and Leaderboards\n\n\n* 'text-classification', 'sentiment-classification': The dataset can be used to train a model for sentiment classification. The model performance is evaluated based on the accuracy of the predicted labels as compared to the given labels in the dataset. A BERT-based model, tf-allociné, achieves 97.44% accuracy on the test set.", "### Languages\n\n\nThe text is in French, as spoken by users of the Allociné.fr website. The BCP-47 code for French is fr.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach data instance contains the following features: *review* and *label*. In the Hugging Face distribution of the dataset, the *label* has 2 possible values, *0* and *1*, which correspond to *negative* and *positive* respectively. See the Allociné corpus viewer to explore more examples.\n\n\nAn example from the Allociné train set looks like the following:", "### Data Fields\n\n\n* 'review': a string containing the review text\n* 'label': an integer, either *0* or *1*, indicating a *negative* or *positive* review, respectively", "### Data Splits\n\n\nThe Allociné dataset has 3 splits: *train*, *validation*, and *test*. The splits contain disjoint sets of movies. The following table contains the number of reviews in each split and the percentage of positive and negative reviews.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe Allociné dataset was developed to support large-scale sentiment analysis in French. It was released alongside the tf-allociné model and used to compare the performance of several language models on this task.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe reviews and ratings were collected using a list of film page urls and the allocine\\_scraper.py tool. Up to 30 reviews were collected for each film.\n\n\nThe reviews were originally labeled with a rating from 0.5 to 5.0 with a step of 0.5 between each rating. Ratings less than or equal to 2 are labeled as negative and ratings greater than or equal to 4 are labeled as positive. Only reviews with less than 2000 characters are included in the dataset.", "#### Who are the source language producers?\n\n\nThe dataset contains movie reviews produced by the online community of the Allociné.fr website.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nReviewer usernames or personal information were not collected with the reviews, but could potentially be recovered. The content of each review may include information and opinions about the film's actors, film crew, and plot.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nSentiment classification is a complex task which requires sophisticated language understanding skills. 
Successful models can support decision-making based on the outcome of the sentiment analysis, though such models currently require a high degree of domain specificity.\n\n\nIt should be noted that the community represented in the dataset may not represent any downstream application's potential users, and the observed behavior of a model trained on this dataset may vary based on the domain and use case.", "### Discussion of Biases\n\n\nThe Allociné website lists a number of topics which violate their terms of service. Further analysis is needed to determine the extent to which moderators have successfully removed such content.", "### Other Known Limitations\n\n\nThe limitations of the Allociné dataset have not yet been investigated, however Staliūnaitė and Bonfil (2017) detail linguistic phenomena that are generally present in sentiment analysis but difficult for models to accurately label, such as negation, adverbial modifiers, and reviewer pragmatics.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe Allociné dataset was collected by Théophile Blard.", "### Licensing Information\n\n\nThe Allociné dataset is licensed under the MIT License.\n\n\n\n> \n> Théophile Blard, French sentiment analysis with BERT, (2020), GitHub repository, <URL\n> \n> \n>", "### Contributions\n\n\nThanks to @thomwolf, @TheophileBlard, @lewtun and @mcmillanmajora for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-French #license-mit #region-us \n", "### Dataset Summary\n\n\nThe Allociné dataset is a French-language dataset for sentiment analysis. The texts are movie reviews written between 2006 and 2020 by members of the Allociné.fr community for various films. It contains 100k positive and 100k negative reviews divided into train (160k), validation (20k), and test (20k).", "### Supported Tasks and Leaderboards\n\n\n* 'text-classification', 'sentiment-classification': The dataset can be used to train a model for sentiment classification. The model performance is evaluated based on the accuracy of the predicted labels as compared to the given labels in the dataset. A BERT-based model, tf-allociné, achieves 97.44% accuracy on the test set.", "### Languages\n\n\nThe text is in French, as spoken by users of the Allociné.fr website. The BCP-47 code for French is fr.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach data instance contains the following features: *review* and *label*. In the Hugging Face distribution of the dataset, the *label* has 2 possible values, *0* and *1*, which correspond to *negative* and *positive* respectively. See the Allociné corpus viewer to explore more examples.\n\n\nAn example from the Allociné train set looks like the following:", "### Data Fields\n\n\n* 'review': a string containing the review text\n* 'label': an integer, either *0* or *1*, indicating a *negative* or *positive* review, respectively", "### Data Splits\n\n\nThe Allociné dataset has 3 splits: *train*, *validation*, and *test*. The splits contain disjoint sets of movies. The following table contains the number of reviews in each split and the percentage of positive and negative reviews.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe Allociné dataset was developed to support large-scale sentiment analysis in French. It was released alongside the tf-allociné model and used to compare the performance of several language models on this task.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe reviews and ratings were collected using a list of film page urls and the allocine\\_scraper.py tool. Up to 30 reviews were collected for each film.\n\n\nThe reviews were originally labeled with a rating from 0.5 to 5.0 with a step of 0.5 between each rating. Ratings less than or equal to 2 are labeled as negative and ratings greater than or equal to 4 are labeled as positive. Only reviews with less than 2000 characters are included in the dataset.", "#### Who are the source language producers?\n\n\nThe dataset contains movie reviews produced by the online community of the Allociné.fr website.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nReviewer usernames or personal information were not collected with the reviews, but could potentially be recovered. 
The content of each review may include information and opinions about the film's actors, film crew, and plot.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nSentiment classification is a complex task which requires sophisticated language understanding skills. Successful models can support decision-making based on the outcome of the sentiment analysis, though such models currently require a high degree of domain specificity.\n\n\nIt should be noted that the community represented in the dataset may not represent any downstream application's potential users, and the observed behavior of a model trained on this dataset may vary based on the domain and use case.", "### Discussion of Biases\n\n\nThe Allociné website lists a number of topics which violate their terms of service. Further analysis is needed to determine the extent to which moderators have successfully removed such content.", "### Other Known Limitations\n\n\nThe limitations of the Allociné dataset have not yet been investigated, however Staliūnaitė and Bonfil (2017) detail linguistic phenomena that are generally present in sentiment analysis but difficult for models to accurately label, such as negation, adverbial modifiers, and reviewer pragmatics.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe Allociné dataset was collected by Théophile Blard.", "### Licensing Information\n\n\nThe Allociné dataset is licensed under the MIT License.\n\n\n\n> \n> Théophile Blard, French sentiment analysis with BERT, (2020), GitHub repository, <URL\n> \n> \n>", "### Contributions\n\n\nThanks to @thomwolf, @TheophileBlard, @lewtun and @mcmillanmajora for adding this dataset." ]
[ 88, 77, 99, 41, 94, 51, 71, 53, 4, 116, 31, 17, 10, 14, 66, 106, 48, 80, 24, 52, 38 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-French #license-mit #region-us \n### Dataset Summary\n\n\nThe Allociné dataset is a French-language dataset for sentiment analysis. The texts are movie reviews written between 2006 and 2020 by members of the Allociné.fr community for various films. It contains 100k positive and 100k negative reviews divided into train (160k), validation (20k), and test (20k).### Supported Tasks and Leaderboards\n\n\n* 'text-classification', 'sentiment-classification': The dataset can be used to train a model for sentiment classification. The model performance is evaluated based on the accuracy of the predicted labels as compared to the given labels in the dataset. A BERT-based model, tf-allociné, achieves 97.44% accuracy on the test set.### Languages\n\n\nThe text is in French, as spoken by users of the Allociné.fr website. The BCP-47 code for French is fr.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nEach data instance contains the following features: *review* and *label*. In the Hugging Face distribution of the dataset, the *label* has 2 possible values, *0* and *1*, which correspond to *negative* and *positive* respectively. See the Allociné corpus viewer to explore more examples.\n\n\nAn example from the Allociné train set looks like the following:### Data Fields\n\n\n* 'review': a string containing the review text\n* 'label': an integer, either *0* or *1*, indicating a *negative* or *positive* review, respectively", "passage: ### Data Splits\n\n\nThe Allociné dataset has 3 splits: *train*, *validation*, and *test*. The splits contain disjoint sets of movies. The following table contains the number of reviews in each split and the percentage of positive and negative reviews.\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThe Allociné dataset was developed to support large-scale sentiment analysis in French. It was released alongside the tf-allociné model and used to compare the performance of several language models on this task.### Source Data#### Initial Data Collection and Normalization\n\n\nThe reviews and ratings were collected using a list of film page urls and the allocine\\_scraper.py tool. Up to 30 reviews were collected for each film.\n\n\nThe reviews were originally labeled with a rating from 0.5 to 5.0 with a step of 0.5 between each rating. Ratings less than or equal to 2 are labeled as negative and ratings greater than or equal to 4 are labeled as positive. Only reviews with less than 2000 characters are included in the dataset.#### Who are the source language producers?\n\n\nThe dataset contains movie reviews produced by the online community of the Allociné.fr website.### Annotations\n\n\nThe dataset does not contain any additional annotations.#### Annotation process\n\n\n[N/A]#### Who are the annotators?\n\n\n[N/A]### Personal and Sensitive Information\n\n\nReviewer usernames or personal information were not collected with the reviews, but could potentially be recovered. The content of each review may include information and opinions about the film's actors, film crew, and plot.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nSentiment classification is a complex task which requires sophisticated language understanding skills. 
Successful models can support decision-making based on the outcome of the sentiment analysis, though such models currently require a high degree of domain specificity.\n\n\nIt should be noted that the community represented in the dataset may not represent any downstream application's potential users, and the observed behavior of a model trained on this dataset may vary based on the domain and use case.### Discussion of Biases\n\n\nThe Allociné website lists a number of topics which violate their terms of service. Further analysis is needed to determine the extent to which moderators have successfully removed such content." ]
afbd92e198bbcf17f660e03076fd2938f5a4bbb2
# Dataset Card for Asian Language Treebank (ALT) ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ - **Leaderboard:** - **Paper:** [Introduction of the Asian Language Treebank](https://ieeexplore.ieee.org/abstract/document/7918974) - **Point of Contact:** [ALT info](mailto:alt-info@khn.nict.go.jp) ### Dataset Summary The ALT project aims to advance state-of-the-art Asian natural language processing (NLP) techniques through open collaboration in developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under [ASEAN IVO](https://www.nict.go.jp/en/asean_ivo/index.html) as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. ### Supported Tasks and Leaderboards Machine Translation, Dependency Parsing ### Languages It supports 13 languages: * Bengali * English * Filipino * Hindi * Bahasa Indonesia * Japanese * Khmer * Lao * Malay * Myanmar (Burmese) * Thai * Vietnamese * Chinese (Simplified Chinese). 
## Dataset Structure ### Data Instances #### ALT Parallel Corpus ``` { "SNT.URLID": "80188", "SNT.URLID.SNTID": "1", "url": "http://en.wikinews.org/wiki/2007_Rugby_World_Cup:_Italy_31_-_5_Portugal", "bg": "[translated sentence]", "en": "[translated sentence]", "en_tok": "[translated sentence]", "fil": "[translated sentence]", "hi": "[translated sentence]", "id": "[translated sentence]", "ja": "[translated sentence]", "khm": "[translated sentence]", "lo": "[translated sentence]", "ms": "[translated sentence]", "my": "[translated sentence]", "th": "[translated sentence]", "vi": "[translated sentence]", "zh": "[translated sentence]" } ``` #### ALT Treebank ``` { "SNT.URLID": "80188", "SNT.URLID.SNTID": "1", "url": "http://en.wikinews.org/wiki/2007_Rugby_World_Cup:_Italy_31_-_5_Portugal", "status": "draft/reviewed", "value": "(S (S (BASENP (NNP Italy)) (VP (VBP have) (VP (VP (VP (VBN defeated) (BASENP (NNP Portugal))) (ADVP (RB 31-5))) (PP (IN in) (NP (BASENP (NNP Pool) (NNP C)) (PP (IN of) (NP (BASENP (DT the) (NN 2007) (NNP Rugby) (NNP World) (NNP Cup)) (PP (IN at) (NP (BASENP (NNP Parc) (FW des) (NNP Princes)) (COMMA ,) (BASENP (NNP Paris) (COMMA ,) (NNP France))))))))))) (PERIOD .))" } ``` #### ALT Myanmar transliteration ``` { "en": "CASINO", "my": [ "ကက်စီနို", "ကစီနို", "ကာစီနို", "ကာဆီနို" ] } ``` ### Data Fields #### ALT Parallel Corpus - SNT.URLID: URL link to the source article listed in [URL.txt](https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ALT-Parallel-Corpus-20191206/URL.txt) - SNT.URLID.SNTID: index number from 1 to 20000. It is a selected sentence from `SNT.URLID`, and bg, en, fil, hi, id, ja, khm, lo, ms, my, th, vi, zh correspond to the target languages #### ALT Treebank - status: it indicates how a sentence is annotated; `draft` sentences are annotated by one annotator and `reviewed` sentences are annotated by two annotators. The annotation differs from language to language; please see [their guidelines](https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/) for more detail. ### Data Splits | | train | valid | test | |-----------|-------|-------|-------| | # articles | 1698 | 98 | 97 | | # sentences | 18088 | 1000 | 1018 | ## Dataset Creation ### Curation Rationale The ALT project was initiated by the [National Institute of Information and Communications Technology, Japan](https://www.nict.go.jp/en/) (NICT) in 2014. NICT started to build Japanese and English ALT and worked with the University of Computer Studies, Yangon, Myanmar (UCSY) to build Myanmar ALT in 2014. Then, the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT), the Institute for Infocomm Research, Singapore (I2R), the Institute of Information Technology, Vietnam (IOIT), and the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) joined to make ALT for Indonesian, Malay, Vietnamese, and Khmer in 2015. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The dataset is sampled from the English Wikinews in 2014. 
These sentences were then annotated with word segmentation, POS tags, and syntax information, in addition to word alignment information, by linguistic experts from * National Institute of Information and Communications Technology, Japan (NICT) for Japanese and English * University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar * the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian * the Institute for Infocomm Research, Singapore (I2R) for Malay * the Institute of Information Technology, Vietnam (IOIT) for Vietnamese * the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators * National Institute of Information and Communications Technology, Japan (NICT) for Japanese and English * University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar * the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian * the Institute for Infocomm Research, Singapore (I2R) for Malay * the Institute of Information Technology, Vietnam (IOIT) for Vietnamese * the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer ### Licensing Information [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) ### Citation Information Please cite the following if you make use of the dataset: Hammam Riza, Michael Purwoadi, Gunarso, Teduh Uliniansyah, Aw Ai Ti, Sharifah Mahani Aljunied, Luong Chi Mai, Vu Tat Thang, Nguyen Phuong Thai, Vichet Chea, Rapid Sun, Sethserey Sam, Sopheap Seng, Khin Mar Soe, Khin Thandar Nwet, Masao Utiyama, Chenchen Ding. (2016) "Introduction of the Asian Language Treebank" Oriental COCOSDA. BibTeX: ``` @inproceedings{riza2016introduction, title={Introduction of the asian language treebank}, author={Riza, Hammam and Purwoadi, Michael and Uliniansyah, Teduh and Ti, Aw Ai and Aljunied, Sharifah Mahani and Mai, Luong Chi and Thang, Vu Tat and Thai, Nguyen Phuong and Chea, Vichet and Sam, Sethserey and others}, booktitle={2016 Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Techniques (O-COCOSDA)}, pages={1--6}, year={2016}, organization={IEEE} } ``` ### Contributions Thanks to [@chameleonTK](https://github.com/chameleonTK) for adding this dataset.
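As a usage sketch, the configurations listed in this card can be loaded with the Hugging Face `datasets` library, and the bracketed `value` strings of the treebank configuration can be read with NLTK's `Tree.fromstring`. The config names (`alt-parallel`, `alt-en`) come from the card's metadata; the NLTK step is an illustrative assumption, not part of the ALT tooling.

```python
from datasets import load_dataset
from nltk.tree import Tree

# The parallel corpus: one row per sentence, with a translation dict
# keyed by language code (bg, en, en_tok, fil, hi, id, ja, khm, ...).
parallel = load_dataset("alt", "alt-parallel")
row = parallel["train"][0]
print(row["SNT.URLID.SNTID"], row["translation"]["en"])

# The English treebank stores a bracketed constituency parse in `value`;
# NLTK can turn it into a tree object (illustrative, not ALT tooling).
treebank = load_dataset("alt", "alt-en")
tree = Tree.fromstring(treebank["train"][0]["value"])
tree.pretty_print()
```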
alt
[ "task_categories:translation", "task_categories:token-classification", "task_ids:parsing", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "multilinguality:translation", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:bn", "language:en", "language:fil", "language:hi", "language:id", "language:ja", "language:km", "language:lo", "language:ms", "language:my", "language:th", "language:vi", "language:zh", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["bn", "en", "fil", "hi", "id", "ja", "km", "lo", "ms", "my", "th", "vi", "zh"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual", "translation"], "size_categories": ["100K<n<1M", "10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation", "token-classification"], "task_ids": ["parsing"], "paperswithcode_id": "alt", "pretty_name": "Asian Language Treebank", "config_names": ["alt-en", "alt-jp", "alt-km", "alt-my", "alt-my-transliteration", "alt-my-west-transliteration", "alt-parallel"], "dataset_info": [{"config_name": "alt-en", "features": [{"name": "SNT.URLID", "dtype": "string"}, {"name": "SNT.URLID.SNTID", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "value", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10075569, "num_examples": 17889}, {"name": "validation", "num_bytes": 544719, "num_examples": 988}, {"name": "test", "num_bytes": 567272, "num_examples": 1017}], "download_size": 3781814, "dataset_size": 11187560}, {"config_name": "alt-jp", "features": [{"name": "SNT.URLID", "dtype": "string"}, {"name": "SNT.URLID.SNTID", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "value", "dtype": "string"}, {"name": "word_alignment", "dtype": "string"}, {"name": "jp_tokenized", "dtype": "string"}, {"name": "en_tokenized", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21888277, "num_examples": 17202}, {"name": "validation", "num_bytes": 1181555, "num_examples": 953}, {"name": "test", "num_bytes": 1175592, "num_examples": 931}], "download_size": 10355366, "dataset_size": 24245424}, {"config_name": "alt-km", "features": [{"name": "SNT.URLID", "dtype": "string"}, {"name": "SNT.URLID.SNTID", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "km_pos_tag", "dtype": "string"}, {"name": "km_tokenized", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12015371, "num_examples": 18088}, {"name": "validation", "num_bytes": 655212, "num_examples": 1000}, {"name": "test", "num_bytes": 673733, "num_examples": 1018}], "download_size": 4344096, "dataset_size": 13344316}, {"config_name": "alt-my", "features": [{"name": "SNT.URLID", "dtype": "string"}, {"name": "SNT.URLID.SNTID", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "value", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20433243, "num_examples": 18088}, {"name": "validation", "num_bytes": 1111394, "num_examples": 1000}, {"name": "test", "num_bytes": 1135193, "num_examples": 1018}], "download_size": 6569025, "dataset_size": 22679830}, {"config_name": "alt-my-transliteration", "features": [{"name": "en", "dtype": "string"}, {"name": "my", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 4249316, "num_examples": 84022}], "download_size": 2163951, "dataset_size": 4249316}, {"config_name": "alt-my-west-transliteration", "features": [{"name": "en", "dtype": "string"}, {"name": "my", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 7411911, "num_examples": 107121}], "download_size": 2857511, "dataset_size": 7411911}, {"config_name": "alt-parallel", "features": [{"name": "SNT.URLID", "dtype": "string"}, {"name": "SNT.URLID.SNTID", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["bg", "en", "en_tok", "fil", "hi", "id", 
"ja", "khm", "lo", "ms", "my", "th", "vi", "zh"]}}}], "splits": [{"name": "train", "num_bytes": 68445916, "num_examples": 18088}, {"name": "validation", "num_bytes": 3710979, "num_examples": 1000}, {"name": "test", "num_bytes": 3814431, "num_examples": 1019}], "download_size": 34707907, "dataset_size": 75971326}], "configs": [{"config_name": "alt-en", "data_files": [{"split": "train", "path": "alt-en/train-*"}, {"split": "validation", "path": "alt-en/validation-*"}, {"split": "test", "path": "alt-en/test-*"}]}, {"config_name": "alt-jp", "data_files": [{"split": "train", "path": "alt-jp/train-*"}, {"split": "validation", "path": "alt-jp/validation-*"}, {"split": "test", "path": "alt-jp/test-*"}]}, {"config_name": "alt-km", "data_files": [{"split": "train", "path": "alt-km/train-*"}, {"split": "validation", "path": "alt-km/validation-*"}, {"split": "test", "path": "alt-km/test-*"}]}, {"config_name": "alt-my", "data_files": [{"split": "train", "path": "alt-my/train-*"}, {"split": "validation", "path": "alt-my/validation-*"}, {"split": "test", "path": "alt-my/test-*"}]}, {"config_name": "alt-my-transliteration", "data_files": [{"split": "train", "path": "alt-my-transliteration/train-*"}]}, {"config_name": "alt-my-west-transliteration", "data_files": [{"split": "train", "path": "alt-my-west-transliteration/train-*"}]}, {"config_name": "alt-parallel", "data_files": [{"split": "train", "path": "alt-parallel/train-*"}, {"split": "validation", "path": "alt-parallel/validation-*"}, {"split": "test", "path": "alt-parallel/test-*"}], "default": true}]}
2024-01-09T12:07:24+00:00
[]
[ "bn", "en", "fil", "hi", "id", "ja", "km", "lo", "ms", "my", "th", "vi", "zh" ]
TAGS #task_categories-translation #task_categories-token-classification #task_ids-parsing #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-multilingual #multilinguality-translation #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-Bengali #language-English #language-Filipino #language-Hindi #language-Indonesian #language-Japanese #language-Khmer #language-Lao #language-Malay (macrolanguage) #language-Burmese #language-Thai #language-Vietnamese #language-Chinese #license-cc-by-4.0 #region-us
Dataset Card for Asian Language Treebank (ALT) ============================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Leaderboard: * Paper: Introduction of the Asian Language Treebank * Point of Contact: ALT info ### Dataset Summary The ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page. The process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages. ### Supported Tasks and Leaderboards Machine Translation, Dependency Parsing ### Languages It supports 13 language: * Bengali * English * Filipino * Hindi * Bahasa Indonesia * Japanese * Khmer * Lao * Malay * Myanmar (Burmese) * Thai * Vietnamese * Chinese (Simplified Chinese). Dataset Structure ----------------- ### Data Instances #### ALT Parallel Corpus #### ALT Treebank #### ALT Myanmar transliteration ### Data Fields #### ALT Parallel Corpus * SNT.URLID: URL link to the source article listed in URL * SNT.URLID.SNTID: index number from 1 to 20000. It is a seletected sentence from 'SNT.URLID' and bg, en, fil, hi, id, ja, khm, lo, ms, my, th, vi, zh correspond to the target language #### ALT Treebank * status: it indicates how a sentence is annotated; 'draft' sentences are annotated by one annotater and 'reviewed' sentences are annotated by two annotater The annotatation is different from language to language, please see their guildlines for more detail. ### Data Splits Dataset Creation ---------------- ### Curation Rationale The ALT project was initiated by the National Institute of Information and Communications Technology, Japan (NICT) in 2014. NICT started to build Japanese and English ALT and worked with the University of Computer Studies, Yangon, Myanmar (UCSY) to build Myanmar ALT in 2014. Then, the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT), the Institute for Infocomm Research, Singapore (I2R), the Institute of Information Technology, Vietnam (IOIT), and the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) joined to make ALT for Indonesian, Malay, Vietnamese, and Khmer in 2015. ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? The dataset is sampled from the English Wikinews in 2014. 
These will be annotated with word segmentation, POS tags, and syntax information, in addition to the word alignment information by linguistic experts from * National Institute of Information and Communications Technology, Japan (NICT) for Japanses and English * University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar * the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian * the Institute for Infocomm Research, Singapore (I2R) for Malay * the Institute of Information Technology, Vietnam (IOIT) for Vietnamese * the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators * National Institute of Information and Communications Technology, Japan (NICT) for Japanses and English * University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar * the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian * the Institute for Infocomm Research, Singapore (I2R) for Malay * the Institute of Information Technology, Vietnam (IOIT) for Vietnamese * the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer ### Licensing Information Creative Commons Attribution 4.0 International (CC BY 4.0) Please cite the following if you make use of the dataset: Hammam Riza, Michael Purwoadi, Gunarso, Teduh Uliniansyah, Aw Ai Ti, Sharifah Mahani Aljunied, Luong Chi Mai, Vu Tat Thang, Nguyen Phuong Thai, Vichet Chea, Rapid Sun, Sethserey Sam, Sopheap Seng, Khin Mar Soe, Khin Thandar Nwet, Masao Utiyama, Chenchen Ding. (2016) "Introduction of the Asian Language Treebank" Oriental COCOSDA. BibTeX: ### Contributions Thanks to @chameleonTK for adding this dataset.
[ "### Dataset Summary\n\n\nThe ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page.\n\n\nThe process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages.", "### Supported Tasks and Leaderboards\n\n\nMachine Translation, Dependency Parsing", "### Languages\n\n\nIt supports 13 language:\n\n\n* Bengali\n* English\n* Filipino\n* Hindi\n* Bahasa Indonesia\n* Japanese\n* Khmer\n* Lao\n* Malay\n* Myanmar (Burmese)\n* Thai\n* Vietnamese\n* Chinese (Simplified Chinese).\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### ALT Parallel Corpus", "#### ALT Treebank", "#### ALT Myanmar transliteration", "### Data Fields", "#### ALT Parallel Corpus\n\n\n* SNT.URLID: URL link to the source article listed in URL\n* SNT.URLID.SNTID: index number from 1 to 20000. It is a seletected sentence from 'SNT.URLID'\n\n\nand bg, en, fil, hi, id, ja, khm, lo, ms, my, th, vi, zh correspond to the target language", "#### ALT Treebank\n\n\n* status: it indicates how a sentence is annotated; 'draft' sentences are annotated by one annotater and 'reviewed' sentences are annotated by two annotater\n\n\nThe annotatation is different from language to language, please see their guildlines for more detail.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe ALT project was initiated by the National Institute of Information and Communications Technology, Japan (NICT) in 2014. NICT started to build Japanese and English ALT and worked with the University of Computer Studies, Yangon, Myanmar (UCSY) to build Myanmar ALT in 2014. Then, the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT), the Institute for Infocomm Research, Singapore (I2R), the Institute of Information Technology, Vietnam (IOIT), and the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) joined to make ALT for Indonesian, Malay, Vietnamese, and Khmer in 2015.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nThe dataset is sampled from the English Wikinews in 2014. 
These will be annotated with word segmentation, POS tags, and syntax information, in addition to the word alignment information by linguistic experts from\n\n\n* National Institute of Information and Communications Technology, Japan (NICT) for Japanses and English\n* University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar\n* the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian\n* the Institute for Infocomm Research, Singapore (I2R) for Malay\n* the Institute of Information Technology, Vietnam (IOIT) for Vietnamese\n* the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\n* National Institute of Information and Communications Technology, Japan (NICT) for Japanses and English\n* University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar\n* the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian\n* the Institute for Infocomm Research, Singapore (I2R) for Malay\n* the Institute of Information Technology, Vietnam (IOIT) for Vietnamese\n* the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer", "### Licensing Information\n\n\nCreative Commons Attribution 4.0 International (CC BY 4.0)\n\n\nPlease cite the following if you make use of the dataset:\n\n\nHammam Riza, Michael Purwoadi, Gunarso, Teduh Uliniansyah, Aw Ai Ti, Sharifah Mahani Aljunied, Luong Chi Mai, Vu Tat Thang, Nguyen Phuong Thai, Vichet Chea, Rapid Sun, Sethserey Sam, Sopheap Seng, Khin Mar Soe, Khin Thandar Nwet, Masao Utiyama, Chenchen Ding. (2016) \"Introduction of the Asian Language Treebank\" Oriental COCOSDA.\n\n\nBibTeX:", "### Contributions\n\n\nThanks to @chameleonTK for adding this dataset." ]
[ "TAGS\n#task_categories-translation #task_categories-token-classification #task_ids-parsing #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-multilingual #multilinguality-translation #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-Bengali #language-English #language-Filipino #language-Hindi #language-Indonesian #language-Japanese #language-Khmer #language-Lao #language-Malay (macrolanguage) #language-Burmese #language-Thai #language-Vietnamese #language-Chinese #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nThe ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page.\n\n\nThe process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages.", "### Supported Tasks and Leaderboards\n\n\nMachine Translation, Dependency Parsing", "### Languages\n\n\nIt supports 13 language:\n\n\n* Bengali\n* English\n* Filipino\n* Hindi\n* Bahasa Indonesia\n* Japanese\n* Khmer\n* Lao\n* Malay\n* Myanmar (Burmese)\n* Thai\n* Vietnamese\n* Chinese (Simplified Chinese).\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### ALT Parallel Corpus", "#### ALT Treebank", "#### ALT Myanmar transliteration", "### Data Fields", "#### ALT Parallel Corpus\n\n\n* SNT.URLID: URL link to the source article listed in URL\n* SNT.URLID.SNTID: index number from 1 to 20000. It is a seletected sentence from 'SNT.URLID'\n\n\nand bg, en, fil, hi, id, ja, khm, lo, ms, my, th, vi, zh correspond to the target language", "#### ALT Treebank\n\n\n* status: it indicates how a sentence is annotated; 'draft' sentences are annotated by one annotater and 'reviewed' sentences are annotated by two annotater\n\n\nThe annotatation is different from language to language, please see their guildlines for more detail.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe ALT project was initiated by the National Institute of Information and Communications Technology, Japan (NICT) in 2014. NICT started to build Japanese and English ALT and worked with the University of Computer Studies, Yangon, Myanmar (UCSY) to build Myanmar ALT in 2014. Then, the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT), the Institute for Infocomm Research, Singapore (I2R), the Institute of Information Technology, Vietnam (IOIT), and the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) joined to make ALT for Indonesian, Malay, Vietnamese, and Khmer in 2015.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nThe dataset is sampled from the English Wikinews in 2014. 
These will be annotated with word segmentation, POS tags, and syntax information, in addition to the word alignment information by linguistic experts from\n\n\n* National Institute of Information and Communications Technology, Japan (NICT) for Japanses and English\n* University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar\n* the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian\n* the Institute for Infocomm Research, Singapore (I2R) for Malay\n* the Institute of Information Technology, Vietnam (IOIT) for Vietnamese\n* the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\n* National Institute of Information and Communications Technology, Japan (NICT) for Japanses and English\n* University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar\n* the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian\n* the Institute for Infocomm Research, Singapore (I2R) for Malay\n* the Institute of Information Technology, Vietnam (IOIT) for Vietnamese\n* the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer", "### Licensing Information\n\n\nCreative Commons Attribution 4.0 International (CC BY 4.0)\n\n\nPlease cite the following if you make use of the dataset:\n\n\nHammam Riza, Michael Purwoadi, Gunarso, Teduh Uliniansyah, Aw Ai Ti, Sharifah Mahani Aljunied, Luong Chi Mai, Vu Tat Thang, Nguyen Phuong Thai, Vichet Chea, Rapid Sun, Sethserey Sam, Sopheap Seng, Khin Mar Soe, Khin Thandar Nwet, Masao Utiyama, Chenchen Ding. (2016) \"Introduction of the Asian Language Treebank\" Oriental COCOSDA.\n\n\nBibTeX:", "### Contributions\n\n\nThanks to @chameleonTK for adding this dataset." ]
[ 185, 131, 18, 55, 6, 5, 5, 7, 5, 89, 71, 11, 143, 4, 10, 160, 5, 5, 9, 18, 7, 8, 14, 110, 143, 18 ]
[ "passage: TAGS\n#task_categories-translation #task_categories-token-classification #task_ids-parsing #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-multilingual #multilinguality-translation #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-Bengali #language-English #language-Filipino #language-Hindi #language-Indonesian #language-Japanese #language-Khmer #language-Lao #language-Malay (macrolanguage) #language-Burmese #language-Thai #language-Vietnamese #language-Chinese #license-cc-by-4.0 #region-us \n### Dataset Summary\n\n\nThe ALT project aims to advance the state-of-the-art Asian natural language processing (NLP) techniques through the open collaboration for developing and using ALT. It was first conducted by NICT and UCSY as described in Ye Kyaw Thu, Win Pa Pa, Masao Utiyama, Andrew Finch and Eiichiro Sumita (2016). Then, it was developed under ASEAN IVO as described in this Web page.\n\n\nThe process of building ALT began with sampling about 20,000 sentences from English Wikinews, and then these sentences were translated into the other languages.### Supported Tasks and Leaderboards\n\n\nMachine Translation, Dependency Parsing### Languages\n\n\nIt supports 13 language:\n\n\n* Bengali\n* English\n* Filipino\n* Hindi\n* Bahasa Indonesia\n* Japanese\n* Khmer\n* Lao\n* Malay\n* Myanmar (Burmese)\n* Thai\n* Vietnamese\n* Chinese (Simplified Chinese).\n\n\nDataset Structure\n-----------------### Data Instances#### ALT Parallel Corpus#### ALT Treebank#### ALT Myanmar transliteration### Data Fields#### ALT Parallel Corpus\n\n\n* SNT.URLID: URL link to the source article listed in URL\n* SNT.URLID.SNTID: index number from 1 to 20000. It is a seletected sentence from 'SNT.URLID'\n\n\nand bg, en, fil, hi, id, ja, khm, lo, ms, my, th, vi, zh correspond to the target language", "passage: #### ALT Treebank\n\n\n* status: it indicates how a sentence is annotated; 'draft' sentences are annotated by one annotater and 'reviewed' sentences are annotated by two annotater\n\n\nThe annotatation is different from language to language, please see their guildlines for more detail.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThe ALT project was initiated by the National Institute of Information and Communications Technology, Japan (NICT) in 2014. NICT started to build Japanese and English ALT and worked with the University of Computer Studies, Yangon, Myanmar (UCSY) to build Myanmar ALT in 2014. Then, the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT), the Institute for Infocomm Research, Singapore (I2R), the Institute of Information Technology, Vietnam (IOIT), and the National Institute of Posts, Telecoms and ICT, Cambodia (NIPTICT) joined to make ALT for Indonesian, Malay, Vietnamese, and Khmer in 2015.### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?\n\n\nThe dataset is sampled from the English Wikinews in 2014. 
These will be annotated with word segmentation, POS tags, and syntax information, in addition to the word alignment information by linguistic experts from\n\n\n* National Institute of Information and Communications Technology, Japan (NICT) for Japanese and English\n* University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar\n* the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian\n* the Institute for Infocomm Research, Singapore (I2R) for Malay\n* the Institute of Information Technology, Vietnam (IOIT) for Vietnamese\n* the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\n* National Institute of Information and Communications Technology, Japan (NICT) for Japanese and English\n* University of Computer Studies, Yangon, Myanmar (UCSY) for Myanmar\n* the Badan Pengkajian dan Penerapan Teknologi, Indonesia (BPPT) for Indonesian\n* the Institute for Infocomm Research, Singapore (I2R) for Malay\n* the Institute of Information Technology, Vietnam (IOIT) for Vietnamese\n* the National Institute of Posts, Telecoms and ICT, Cambodia for Khmer" ]
9d9c45c18f8c3cf1b23a3c27917b60cbf28f3289
# Dataset Card for Amazon Review Polarity ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://registry.opendata.aws/ - **Repository:** https://github.com/zhangxiangxiao/Crepe - **Paper:** https://arxiv.org/abs/1509.01626 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Xiang Zhang](mailto:xiang.zhang@nyu.edu) ### Dataset Summary The Amazon reviews dataset consists of reviews from Amazon. The data span a period of 18 years, including ~35 million reviews up to March 2013. Reviews include product and user information, ratings, and a plaintext review. ### Supported Tasks and Leaderboards - `text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the content and the title, predict the correct star rating. ### Languages Mainly English. ## Dataset Structure ### Data Instances A typical data point comprises a title, a content and the corresponding label. An example from the AmazonPolarity test set looks as follows: ``` { 'title':'Great CD', 'content':"My lovely Pat has one of the GREAT voices of her generation. I have listened to this CD for YEARS and I still LOVE IT. When I'm in a good mood it makes me feel better. A bad mood just evaporates like sugar in the rain. This CD just oozes LIFE. Vocals are jusat STUUNNING and lyrics just kill. One of life's hidden gems. This is a desert isle CD in my book. Why she never made it big is just beyond me. Everytime I play this, no matter black, white, young, old, male, female EVERYBODY says one thing ""Who was that singing ?""", 'label':1 } ``` ### Data Fields - 'title': a string containing the title of the review - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed by an "n" character, that is "\n". - 'content': a string containing the body of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed by an "n" character, that is "\n". - 'label': either 1 (positive) or 0 (negative) rating. ### Data Splits The Amazon reviews polarity dataset is constructed by taking review scores 1 and 2 as negative, and 4 and 5 as positive. Samples with score 3 are ignored. Each class has 1,800,000 training samples and 200,000 testing samples. ## Dataset Creation ### Curation Rationale The Amazon reviews polarity dataset is constructed by Xiang Zhang (xiang.zhang@nyu.edu). 
It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015). ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Apache License 2.0 ### Citation Information McAuley, Julian, and Jure Leskovec. "Hidden factors and hidden topics: understanding rating dimensions with review text." In Proceedings of the 7th ACM conference on Recommender systems, pp. 165-172. 2013. Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015) ### Contributions Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset.
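As a usage reference, the following is a minimal sketch of loading this dataset with the Hugging Face `datasets` library, assuming the data is hosted on the Hub under the `amazon_polarity` identifier recorded in the metadata below. Note that the quote and newline escaping described under Data Fields applies to the raw CSV distribution; a loader of this kind is expected to return plain, unescaped strings.

```python
from datasets import load_dataset

# Load both splits; per the dataset card, each example carries
# a title, a content body, and a binary polarity label.
ds = load_dataset("amazon_polarity")

example = ds["test"][0]
print(example["title"])                                    # e.g. "Great CD"
print(example["content"][:120])                            # review body (truncated here)
print("positive" if example["label"] == 1 else "negative") # label 1 = positive, 0 = negative
```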
amazon_polarity
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:1509.01626", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Amazon Review Polarity", "dataset_info": {"config_name": "amazon_polarity", "features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1604364432, "num_examples": 3600000}, {"name": "test", "num_bytes": 178176193, "num_examples": 400000}], "download_size": 1145430497, "dataset_size": 1782540625}, "configs": [{"config_name": "amazon_polarity", "data_files": [{"split": "train", "path": "amazon_polarity/train-*"}, {"split": "test", "path": "amazon_polarity/test-*"}], "default": true}], "train-eval-index": [{"config": "amazon_polarity", "task": "text-classification", "task_id": "binary_classification", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"content": "text", "label": "target"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 macro", "args": {"average": "macro"}}, {"type": "f1", "name": "F1 micro", "args": {"average": "micro"}}, {"type": "f1", "name": "F1 weighted", "args": {"average": "weighted"}}, {"type": "precision", "name": "Precision macro", "args": {"average": "macro"}}, {"type": "precision", "name": "Precision micro", "args": {"average": "micro"}}, {"type": "precision", "name": "Precision weighted", "args": {"average": "weighted"}}, {"type": "recall", "name": "Recall macro", "args": {"average": "macro"}}, {"type": "recall", "name": "Recall micro", "args": {"average": "micro"}}, {"type": "recall", "name": "Recall weighted", "args": {"average": "weighted"}}]}]}
2024-01-09T12:23:33+00:00
[ "1509.01626" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-apache-2.0 #arxiv-1509.01626 #region-us
# Dataset Card for Amazon Review Polarity ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: Xiang Zhang ### Dataset Summary The Amazon reviews dataset consists of reviews from Amazon. The data span a period of 18 years, including ~35 million reviews up to March 2013. Reviews include product and user information, ratings, and a plaintext review. ### Supported Tasks and Leaderboards - 'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the content and the title, predict the correct star rating. ### Languages Mainly English. ## Dataset Structure ### Data Instances A typical data point comprises a title, a content and the corresponding label. An example from the AmazonPolarity test set looks as follows: ### Data Fields - 'title': a string containing the title of the review - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed by an "n" character, that is "\n". - 'content': a string containing the body of the document - escaped using double quotes (") and any internal double quote is escaped by 2 double quotes (""). New lines are escaped by a backslash followed by an "n" character, that is "\n". - 'label': either 1 (positive) or 0 (negative) rating. ### Data Splits The Amazon reviews polarity dataset is constructed by taking review scores 1 and 2 as negative, and 4 and 5 as positive. Samples with score 3 are ignored. Each class has 1,800,000 training samples and 200,000 testing samples. ## Dataset Creation ### Curation Rationale The Amazon reviews polarity dataset is constructed by Xiang Zhang (URL@URL). It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015). ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Apache License 2.0 McAuley, Julian, and Jure Leskovec. "Hidden factors and hidden topics: understanding rating dimensions with review text." In Proceedings of the 7th ACM conference on Recommender systems, pp. 165-172. 2013. Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015) ### Contributions Thanks to @hfawaz for adding this dataset.
[ "# Dataset Card for Amazon Review Polarity", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Xiang Zhang", "### Dataset Summary\n\nThe Amazon reviews dataset consists of reviews from amazon.\nThe data span a period of 18 years, including ~35 million reviews up to March 2013.\nReviews include product and user information, ratings, and a plaintext review.", "### Supported Tasks and Leaderboards\n\n- 'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the content and the title, predict the correct star rating.", "### Languages\n\nMainly English.", "## Dataset Structure", "### Data Instances\n\nA typical data point, comprises of a title, a content and the corresponding label. \n\nAn example from the AmazonPolarity test set looks as follows:", "### Data Fields\n\n- 'title': a string containing the title of the review - escaped using double quotes (\") and any internal double quote is escaped by 2 double quotes (\"\"). New lines are escaped by a backslash followed with an \"n\" character, that is \"\\n\".\n- 'content': a string containing the body of the document - escaped using double quotes (\") and any internal double quote is escaped by 2 double quotes (\"\"). New lines are escaped by a backslash followed with an \"n\" character, that is \"\\n\".\n- 'label': either 1 (positive) or 0 (negative) rating.", "### Data Splits\n\nThe Amazon reviews polarity dataset is constructed by taking review score 1 and 2 as negative, and 4 and 5 as positive. Samples of score 3 is ignored. Each class has 1,800,000 training samples and 200,000 testing samples.", "## Dataset Creation", "### Curation Rationale\n\nThe Amazon reviews polarity dataset is constructed by Xiang Zhang (URL@URL). It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nApache License 2.0\n\n\n\nMcAuley, Julian, and Jure Leskovec. \"Hidden factors and hidden topics: understanding rating dimensions with review text.\" In Proceedings of the 7th ACM conference on Recommender systems, pp. 165-172. 2013.\n\nXiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015)", "### Contributions\n\nThanks to @hfawaz for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-apache-2.0 #arxiv-1509.01626 #region-us \n", "# Dataset Card for Amazon Review Polarity", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Xiang Zhang", "### Dataset Summary\n\nThe Amazon reviews dataset consists of reviews from amazon.\nThe data span a period of 18 years, including ~35 million reviews up to March 2013.\nReviews include product and user information, ratings, and a plaintext review.", "### Supported Tasks and Leaderboards\n\n- 'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the content and the title, predict the correct star rating.", "### Languages\n\nMainly English.", "## Dataset Structure", "### Data Instances\n\nA typical data point, comprises of a title, a content and the corresponding label. \n\nAn example from the AmazonPolarity test set looks as follows:", "### Data Fields\n\n- 'title': a string containing the title of the review - escaped using double quotes (\") and any internal double quote is escaped by 2 double quotes (\"\"). New lines are escaped by a backslash followed with an \"n\" character, that is \"\\n\".\n- 'content': a string containing the body of the document - escaped using double quotes (\") and any internal double quote is escaped by 2 double quotes (\"\"). New lines are escaped by a backslash followed with an \"n\" character, that is \"\\n\".\n- 'label': either 1 (positive) or 0 (negative) rating.", "### Data Splits\n\nThe Amazon reviews polarity dataset is constructed by taking review score 1 and 2 as negative, and 4 and 5 as positive. Samples of score 3 is ignored. Each class has 1,800,000 training samples and 200,000 testing samples.", "## Dataset Creation", "### Curation Rationale\n\nThe Amazon reviews polarity dataset is constructed by Xiang Zhang (URL@URL). It is used as a text classification benchmark in the following paper: Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015).", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nApache License 2.0\n\n\n\nMcAuley, Julian, and Jure Leskovec. 
\"Hidden factors and hidden topics: understanding rating dimensions with review text.\" In Proceedings of the 7th ACM conference on Recommender systems, pp. 165-172. 2013.\n\nXiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for Text Classification. Advances in Neural Information Processing Systems 28 (NIPS 2015)", "### Contributions\n\nThanks to @hfawaz for adding this dataset." ]
[ 100, 9, 120, 31, 54, 51, 8, 6, 40, 148, 59, 5, 89, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 108, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-apache-2.0 #arxiv-1509.01626 #region-us \n# Dataset Card for Amazon Review Polarity## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Xiang Zhang### Dataset Summary\n\nThe Amazon reviews dataset consists of reviews from amazon.\nThe data span a period of 18 years, including ~35 million reviews up to March 2013.\nReviews include product and user information, ratings, and a plaintext review.### Supported Tasks and Leaderboards\n\n- 'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the content and the title, predict the correct star rating.### Languages\n\nMainly English.## Dataset Structure### Data Instances\n\nA typical data point, comprises of a title, a content and the corresponding label. \n\nAn example from the AmazonPolarity test set looks as follows:" ]
b6115b04af1d02b3c30849bdd4c55899bff0ae63
# Dataset Card for The Multilingual Amazon Reviews Corpus ## Table of Contents - [Dataset Card for amazon_reviews_multi](#dataset-card-for-amazon_reviews_multi) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [plain_text](#plain_text) - [Data Fields](#data-fields) - [plain_text](#plain_text-1) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Webpage:** https://registry.opendata.aws/amazon-reviews-ml/ - **Paper:** https://arxiv.org/abs/2010.02573 - **Point of Contact:** [multilingual-reviews-dataset@amazon.com](mailto:multilingual-reviews-dataset@amazon.com) ### Dataset Summary <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Defunct:</b> Dataset "amazon_reviews_multi" is defunct and no longer accessible due to the decision of data providers.</p> </div> We provide an Amazon product reviews dataset for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID and the coarse-grained product category (e.g. ‘books’, ‘appliances’, etc.) The corpus is balanced across stars, so each star rating constitutes 20% of the reviews in each language. For each language, there are 200,000, 5,000 and 5,000 reviews in the training, development and test sets respectively. The maximum number of reviews per reviewer is 20 and the maximum number of reviews per product is 20. All reviews are truncated after 2,000 characters, and all reviews are at least 20 characters long. Note that the language of a review does not necessarily match the language of its marketplace (e.g. reviews from amazon.de are primarily written in German, but could also be written in English, etc.). For this reason, we applied a language detection algorithm based on the work in Bojanowski et al. (2017) to determine the language of the review text and we removed reviews that were not written in the expected language. 
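The exact filtering code is not published in this card; purely as an illustration, a comparable filter could be built on fastText's off-the-shelf language identifier, which descends from the same line of work as Bojanowski et al. (2017). The model file name, the confidence threshold, and the helper below are assumptions for the sketch, not the authors' implementation.

```python
import fasttext

# Assumed setup: the pretrained language-ID model lid.176.bin has been
# downloaded separately from the fastText website.
model = fasttext.load_model("lid.176.bin")

def keep_review(text: str, expected_lang: str, threshold: float = 0.5) -> bool:
    """Keep a review only if it appears to be written in the expected language."""
    # fastText's predict() rejects newlines, so flatten the text first.
    labels, probs = model.predict(text.replace("\n", " "))
    lang = labels[0].replace("__label__", "")
    return lang == expected_lang and probs[0] >= threshold

# e.g. a review from the German marketplace is kept only if it looks German.
print(keep_review("Leider nach einmal waschen ausgeblichen.", "de"))
```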
### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish. ## Dataset Structure ### Data Instances Each data instance corresponds to a review. The original JSON for an instance looks like so (German example): ```json { "review_id": "de_0784695", "product_id": "product_de_0572654", "reviewer_id": "reviewer_de_0645436", "stars": "1", "review_body": "Leider, leider nach einmal waschen ausgeblichen . Es sieht super h\u00fcbsch aus , nur leider stinkt es ganz schrecklich und ein Waschgang in der Maschine ist notwendig ! Nach einem mal waschen sah es aus als w\u00e4re es 10 Jahre alt und hatte 1000 e von Waschg\u00e4ngen hinter sich :( echt schade !", "review_title": "Leider nicht zu empfehlen", "language": "de", "product_category": "home" } ``` ### Data Fields - `review_id`: A string identifier of the review. - `product_id`: A string identifier of the product being reviewed. - `reviewer_id`: A string identifier of the reviewer. - `stars`: An int between 1-5 indicating the number of stars. - `review_body`: The text body of the review. - `review_title`: The text title of the review. - `language`: The string identifier of the review language. - `product_category`: String representation of the product's category. ### Data Splits Each language configuration comes with its own `train`, `validation`, and `test` splits. The `all_languages` split is simply a concatenation of the corresponding split across all languages. That is, the `train` split for `all_languages` is a concatenation of the `train` splits for each of the languages and likewise for `validation` and `test`. ## Dataset Creation ### Curation Rationale The dataset is motivated by the desire to advance sentiment analysis and text classification in other (non-English) languages. ### Source Data #### Initial Data Collection and Normalization The authors gathered the reviews from the marketplaces in the US, Japan, Germany, France, Spain, and China for the English, Japanese, German, French, Spanish, and Chinese languages, respectively. They then ensured the correct language by applying a language detection algorithm, only retaining those of the target language. In a random sample of the resulting reviews, the authors observed a small percentage of target languages that were incorrectly filtered out and a very few mismatched languages that were incorrectly retained. #### Who are the source language producers? The original text comes from Amazon customers reviewing products on the marketplace across a variety of product categories. ### Annotations #### Annotation process Each of the fields included is submitted by the user with the review or otherwise associated with the review. No manual or machine-driven annotation was necessary. #### Who are the annotators? N/A ### Personal and Sensitive Information According to the original dataset [license terms](https://docs.opendata.aws/amazon-reviews-ml/license.txt), you may not: - link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or - attempt to determine the identity of the author of any content in the Reviews Corpus. If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically terminate without prejudice to any of the other rights or remedies Amazon may have. 
## Considerations for Using the Data ### Social Impact of Dataset This dataset is part of an effort to encourage text classification research in languages other than English. Such work increases the accessibility of natural language technology to more regions and cultures. Unfortunately, each of the languages included here is relatively high resource and well studied. ### Discussion of Biases The dataset contains only reviews from verified purchases (as described in the paper, section 2.1), and the reviews should conform to the [Amazon Community Guidelines](https://www.amazon.com/gp/help/customer/display.html?nodeId=GLHXEX85MENUE4XF). ### Other Known Limitations The dataset is constructed so that the distribution of star ratings is balanced. This feature has some advantages for purposes of classification, but some types of language may be over or underrepresented relative to the original distribution of reviews to achieve this balance. ## Additional Information ### Dataset Curators Published by Phillip Keung, Yichao Lu, György Szarvas, and Noah A. Smith. Managed by Amazon. ### Licensing Information Amazon has licensed this dataset under its own agreement for non-commercial research usage only. This licence is quite restrictive, preventing use anywhere a fee is received, including paid-for internships etc. A copy of the agreement can be found at the dataset webpage here: https://docs.opendata.aws/amazon-reviews-ml/license.txt By accessing the Multilingual Amazon Reviews Corpus ("Reviews Corpus"), you agree that the Reviews Corpus is an Amazon Service subject to the [Amazon.com Conditions of Use](https://www.amazon.com/gp/help/customer/display.html/ref=footer_cou?ie=UTF8&nodeId=508088) and you agree to be bound by them, with the following additional conditions: In addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Corpus for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Corpus or its contents, including use of the Reviews Corpus for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Corpus. If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically terminate without prejudice to any of the other rights or remedies Amazon may have. ### Citation Information Please cite the following paper (arXiv) if you found this dataset useful: Phillip Keung, Yichao Lu, György Szarvas and Noah A. Smith. “The Multilingual Amazon Reviews Corpus.” In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020. ``` @inproceedings{marc_reviews, title={The Multilingual Amazon Reviews Corpus}, author={Keung, Phillip and Lu, Yichao and Szarvas, György and Smith, Noah A.}, booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing}, year={2020} } ``` ### Contributions Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
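Because the dataset is now defunct, the following load-and-inspect sketch is historical: the calls match the configuration names and split sizes recorded in the metadata below, but they are expected to fail against the Hub today unless redirected to a local copy of the data.

```python
from datasets import load_dataset

# Historical usage only; the Hub dataset is no longer accessible.
reviews_de = load_dataset("amazon_reviews_multi", "de")  # also "en", "es", "fr", "ja", "zh"
print(reviews_de["train"][0]["review_title"])

# The "all_languages" config simply concatenated the per-language splits:
# 6 languages x 200,000 training reviews = 1,200,000 rows.
all_reviews = load_dataset("amazon_reviews_multi", "all_languages")
print(len(all_reviews["train"]))  # 1200000, per the dataset card
```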
amazon_reviews_multi
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:text-scoring", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "task_ids:topic-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "source_datasets:original", "language:de", "language:en", "language:es", "language:fr", "language:ja", "language:zh", "license:other", "arxiv:2010.02573", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["de", "en", "es", "fr", "ja", "zh"], "license": ["other"], "multilinguality": ["monolingual", "multilingual"], "size_categories": ["100K<n<1M", "1M<n<10M"], "source_datasets": ["original"], "task_categories": ["summarization", "text-generation", "fill-mask", "text-classification"], "task_ids": ["text-scoring", "language-modeling", "masked-language-modeling", "sentiment-classification", "sentiment-scoring", "topic-classification"], "pretty_name": "The Multilingual Amazon Reviews Corpus", "config_names": ["all_languages", "de", "en", "es", "fr", "ja", "zh"], "dataset_info": [{"config_name": "all_languages", "features": [{"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "reviewer_id", "dtype": "string"}, {"name": "stars", "dtype": "int32"}, {"name": "review_body", "dtype": "string"}, {"name": "review_title", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "product_category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 364405048, "num_examples": 1200000}, {"name": "validation", "num_bytes": 9047533, "num_examples": 30000}, {"name": "test", "num_bytes": 9099141, "num_examples": 30000}], "download_size": 640320386, "dataset_size": 382551722}, {"config_name": "de", "features": [{"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "reviewer_id", "dtype": "string"}, {"name": "stars", "dtype": "int32"}, {"name": "review_body", "dtype": "string"}, {"name": "review_title", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "product_category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 64485678, "num_examples": 200000}, {"name": "validation", "num_bytes": 1605727, "num_examples": 5000}, {"name": "test", "num_bytes": 1611044, "num_examples": 5000}], "download_size": 94802490, "dataset_size": 67702449}, {"config_name": "en", "features": [{"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "reviewer_id", "dtype": "string"}, {"name": "stars", "dtype": "int32"}, {"name": "review_body", "dtype": "string"}, {"name": "review_title", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "product_category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 58601089, "num_examples": 200000}, {"name": "validation", "num_bytes": 1474672, "num_examples": 5000}, {"name": "test", "num_bytes": 1460565, "num_examples": 5000}], "download_size": 86094112, "dataset_size": 61536326}, {"config_name": "es", "features": [{"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "reviewer_id", "dtype": "string"}, {"name": "stars", "dtype": "int32"}, {"name": "review_body", "dtype": "string"}, {"name": "review_title", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "product_category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 52375658, "num_examples": 200000}, {"name": "validation", "num_bytes": 1303958, "num_examples": 5000}, {"name": "test", "num_bytes": 1312347, "num_examples": 5000}], "download_size": 81345461, "dataset_size": 54991963}, {"config_name": "fr", "features": [{"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "reviewer_id", "dtype": "string"}, {"name": "stars", "dtype": "int32"}, {"name": "review_body", "dtype": "string"}, {"name": "review_title", "dtype": "string"}, {"name": 
"language", "dtype": "string"}, {"name": "product_category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 54593565, "num_examples": 200000}, {"name": "validation", "num_bytes": 1340763, "num_examples": 5000}, {"name": "test", "num_bytes": 1364510, "num_examples": 5000}], "download_size": 85917293, "dataset_size": 57298838}, {"config_name": "ja", "features": [{"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "reviewer_id", "dtype": "string"}, {"name": "stars", "dtype": "int32"}, {"name": "review_body", "dtype": "string"}, {"name": "review_title", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "product_category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 82401390, "num_examples": 200000}, {"name": "validation", "num_bytes": 2035391, "num_examples": 5000}, {"name": "test", "num_bytes": 2048048, "num_examples": 5000}], "download_size": 177773783, "dataset_size": 86484829}, {"config_name": "zh", "features": [{"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "reviewer_id", "dtype": "string"}, {"name": "stars", "dtype": "int32"}, {"name": "review_body", "dtype": "string"}, {"name": "review_title", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "product_category", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51947668, "num_examples": 200000}, {"name": "validation", "num_bytes": 1287106, "num_examples": 5000}, {"name": "test", "num_bytes": 1302711, "num_examples": 5000}], "download_size": 114387247, "dataset_size": 54537485}], "viewer": false}
2023-11-02T14:52:21+00:00
[ "2010.02573" ]
[ "de", "en", "es", "fr", "ja", "zh" ]
TAGS #task_categories-summarization #task_categories-text-generation #task_categories-fill-mask #task_categories-text-classification #task_ids-text-scoring #task_ids-language-modeling #task_ids-masked-language-modeling #task_ids-sentiment-classification #task_ids-sentiment-scoring #task_ids-topic-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-original #language-German #language-English #language-Spanish #language-French #language-Japanese #language-Chinese #license-other #arxiv-2010.02573 #region-us
# Dataset Card for The Multilingual Amazon Reviews Corpus ## Table of Contents - Dataset Card for amazon_reviews_multi - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - plain_text - Data Fields - plain_text - Data Splits - Dataset Creation - Curation Rationale - Source Data - Initial Data Collection and Normalization - Who are the source language producers? - Annotations - Annotation process - Who are the annotators? - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Webpage: URL - Paper: URL - Point of Contact: multilingual-reviews-dataset@URL ### Dataset Summary <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Defunct:</b> Dataset "amazon_reviews_multi" is defunct and no longer accessible due to the decision of data providers.</p> </div> We provide an Amazon product reviews dataset for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID and the coarse-grained product category (e.g. ‘books’, ‘appliances’, etc.) The corpus is balanced across stars, so each star rating constitutes 20% of the reviews in each language. For each language, there are 200,000, 5,000 and 5,000 reviews in the training, development and test sets respectively. The maximum number of reviews per reviewer is 20 and the maximum number of reviews per product is 20. All reviews are truncated after 2,000 characters, and all reviews are at least 20 characters long. Note that the language of a review does not necessarily match the language of its marketplace (e.g. reviews from URL are primarily written in German, but could also be written in English, etc.). For this reason, we applied a language detection algorithm based on the work in Bojanowski et al. (2017) to determine the language of the review text and we removed reviews that were not written in the expected language. ### Supported Tasks and Leaderboards ### Languages The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish. ## Dataset Structure ### Data Instances Each data instance corresponds to a review. The original JSON for an instance looks like so (German example): ### Data Fields - 'review_id': A string identifier of the review. - 'product_id': A string identifier of the product being reviewed. - 'reviewer_id': A string identifier of the reviewer. - 'stars': An int between 1-5 indicating the number of stars. - 'review_body': The text body of the review. - 'review_title': The text title of the review. - 'language': The string identifier of the review language. - 'product_category': String representation of the product's category. ### Data Splits Each language configuration comes with its own 'train', 'validation', and 'test' splits. The 'all_languages' split is simply a concatenation of the corresponding split across all languages. 
That is, the 'train' split for 'all_languages' is a concatenation of the 'train' splits for each of the languages and likewise for 'validation' and 'test'. ## Dataset Creation ### Curation Rationale The dataset is motivated by the desire to advance sentiment analysis and text classification in other (non-English) languages. ### Source Data #### Initial Data Collection and Normalization The authors gathered the reviews from the marketplaces in the US, Japan, Germany, France, Spain, and China for the English, Japanese, German, French, Spanish, and Chinese languages, respectively. They then ensured the correct language by applying a language detection algorithm, only retaining those of the target language. In a random sample of the resulting reviews, the authors observed a small percentage of target languages that were incorrectly filtered out and a very few mismatched languages that were incorrectly retained. #### Who are the source language producers? The original text comes from Amazon customers reviewing products on the marketplace across a variety of product categories. ### Annotations #### Annotation process Each of the fields included is submitted by the user with the review or otherwise associated with the review. No manual or machine-driven annotation was necessary. #### Who are the annotators? N/A ### Personal and Sensitive Information According to the original dataset license terms, you may not: - link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or - attempt to determine the identity of the author of any content in the Reviews Corpus. If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically terminate without prejudice to any of the other rights or remedies Amazon may have. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is part of an effort to encourage text classification research in languages other than English. Such work increases the accessibility of natural language technology to more regions and cultures. Unfortunately, each of the languages included here is relatively high resource and well studied. ### Discussion of Biases The dataset contains only reviews from verified purchases (as described in the paper, section 2.1), and the reviews should conform to the Amazon Community Guidelines. ### Other Known Limitations The dataset is constructed so that the distribution of star ratings is balanced. This feature has some advantages for purposes of classification, but some types of language may be over or underrepresented relative to the original distribution of reviews to achieve this balance. ## Additional Information ### Dataset Curators Published by Phillip Keung, Yichao Lu, György Szarvas, and Noah A. Smith. Managed by Amazon. ### Licensing Information Amazon has licensed this dataset under its own agreement for non-commercial research usage only. This licence is quite restrictive, preventing use anywhere a fee is received, including paid-for internships etc. 
A copy of the agreement can be found at the dataset webpage here: URL By accessing the Multilingual Amazon Reviews Corpus ("Reviews Corpus"), you agree that the Reviews Corpus is an Amazon Service subject to the URL Conditions of Use and you agree to be bound by them, with the following additional conditions: In addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Corpus for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Corpus or its contents, including use of the Reviews Corpus for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Corpus. If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically terminate without prejudice to any of the other rights or remedies Amazon may have. Please cite the following paper (arXiv) if you found this dataset useful: Phillip Keung, Yichao Lu, György Szarvas and Noah A. Smith. “The Multilingual Amazon Reviews Corpus.” In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020. ### Contributions Thanks to @joeddav for adding this dataset.
[ "# Dataset Card for The Multilingual Amazon Reviews Corpus", "## Table of Contents\n- Dataset Card for amazon_reviews_multi\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - plain_text\n - Data Fields\n - plain_text\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Webpage: URL\n- Paper: URL\n- Point of Contact: multilingual-reviews-dataset@URL", "### Dataset Summary\n\n<div class=\"course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400\">\n <p><b>Defunct:</b> Dataset \"amazon_reviews_multi\" is defunct and no longer accessible due to the decision of data providers.</p>\n</div>\n\nWe provide an Amazon product reviews dataset for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID and the coarse-grained product category (e.g. ‘books’, ‘appliances’, etc.) The corpus is balanced across stars, so each star rating constitutes 20% of the reviews in each language.\n\nFor each language, there are 200,000, 5,000 and 5,000 reviews in the training, development and test sets respectively. The maximum number of reviews per reviewer is 20 and the maximum number of reviews per product is 20. All reviews are truncated after 2,000 characters, and all reviews are at least 20 characters long.\n\nNote that the language of a review does not necessarily match the language of its marketplace (e.g. reviews from URL are primarily written in German, but could also be written in English, etc.). For this reason, we applied a language detection algorithm based on the work in Bojanowski et al. (2017) to determine the language of the review text and we removed reviews that were not written in the expected language.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset contains reviews in English, Japanese, German, French, Chinese and Spanish.", "## Dataset Structure", "### Data Instances\n\nEach data instance corresponds to a review. 
The original JSON for an instance looks like so (German example):", "### Data Fields\n\n- 'review_id': A string identifier of the review.\n- 'product_id': A string identifier of the product being reviewed.\n- 'reviewer_id': A string identifier of the reviewer.\n- 'stars': An int between 1-5 indicating the number of stars.\n- 'review_body': The text body of the review.\n- 'review_title': The text title of the review.\n- 'language': The string identifier of the review language.\n- 'product_category': String representation of the product's category.", "### Data Splits\n\nEach language configuration comes with its own 'train', 'validation', and 'test' splits. The 'all_languages' split\nis simply a concatenation of the corresponding split across all languages. That is, the 'train' split for\n'all_languages' is a concatenation of the 'train' splits for each of the languages and likewise for 'validation' and\n'test'.", "## Dataset Creation", "### Curation Rationale\n\nThe dataset is motivated by the desire to advance sentiment analysis and text classification in other (non-English)\nlanguages.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe authors gathered the reviews from the marketplaces in the US, Japan, Germany, France, Spain, and China for the\nEnglish, Japanese, German, French, Spanish, and Chinese languages, respectively. They then ensured the correct\nlanguage by applying a language detection algorithm, only retaining those of the target language. In a random sample\nof the resulting reviews, the authors observed a small percentage of target languages that were incorrectly filtered\nout and a very few mismatched languages that were incorrectly retained.", "#### Who are the source language producers?\n\nThe original text comes from Amazon customers reviewing products on the marketplace across a variety of product\ncategories.", "### Annotations", "#### Annotation process\n\nEach of the fields included is submitted by the user with the review or otherwise associated with the review. No\nmanual or machine-driven annotation was necessary.", "#### Who are the annotators?\n\nN/A", "### Personal and Sensitive Information\n\nAccording to the original dataset license terms, you may not:\n- link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or \n- attempt to determine the identity of the author of any content in the Reviews Corpus.\n\nIf you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically\nterminate without prejudice to any of the other rights or remedies Amazon may have.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is part of an effort to encourage text classification research in languages other than English. Such\nwork increases the accessibility of natural language technology to more regions and cultures. Unfortunately, each of\nthe languages included here is relatively high resource and well studied.", "### Discussion of Biases\n\nThe dataset contains only reviews from verified purchases (as described in the paper, section 2.1), and the reviews\nshould conform to the Amazon Community Guidelines.", "### Other Known Limitations\n\nThe dataset is constructed so that the distribution of star ratings is balanced. 
This feature has some advantages for\npurposes of classification, but some types of language may be over or underrepresented relative to the original\ndistribution of reviews to achieve this balance.", "## Additional Information", "### Dataset Curators\n\nPublished by Phillip Keung, Yichao Lu, György Szarvas, and Noah A. Smith. Managed by Amazon.", "### Licensing Information\n\nAmazon has licensed this dataset under its own agreement for non-commercial research usage only. This licence is quite restrictive, preventing use anywhere a fee is received, including paid-for internships etc. A copy of the agreement can be found at the dataset webpage here:\nURL\n\nBy accessing the Multilingual Amazon Reviews Corpus (\"Reviews Corpus\"), you agree that the Reviews Corpus is an Amazon Service subject to the URL Conditions of Use and you agree to be bound by them, with the following additional conditions:\n\nIn addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Corpus for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Corpus or its contents, including use of the Reviews Corpus for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Corpus. If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically terminate without prejudice to any of the other rights or remedies Amazon may have.\n\n\n\nPlease cite the following paper (arXiv) if you found this dataset useful:\n\nPhillip Keung, Yichao Lu, György Szarvas and Noah A. Smith. “The Multilingual Amazon Reviews Corpus.” In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020.", "### Contributions\n\nThanks to @joeddav for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #task_categories-text-generation #task_categories-fill-mask #task_categories-text-classification #task_ids-text-scoring #task_ids-language-modeling #task_ids-masked-language-modeling #task_ids-sentiment-classification #task_ids-sentiment-scoring #task_ids-topic-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-original #language-German #language-English #language-Spanish #language-French #language-Japanese #language-Chinese #license-other #arxiv-2010.02573 #region-us \n", "# Dataset Card for The Multilingual Amazon Reviews Corpus", "## Table of Contents\n- Dataset Card for amazon_reviews_multi\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - plain_text\n - Data Fields\n - plain_text\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Webpage: URL\n- Paper: URL\n- Point of Contact: multilingual-reviews-dataset@URL", "### Dataset Summary\n\n<div class=\"course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400\">\n <p><b>Defunct:</b> Dataset \"amazon_reviews_multi\" is defunct and no longer accessible due to the decision of data providers.</p>\n</div>\n\nWe provide an Amazon product reviews dataset for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID and the coarse-grained product category (e.g. ‘books’, ‘appliances’, etc.) The corpus is balanced across stars, so each star rating constitutes 20% of the reviews in each language.\n\nFor each language, there are 200,000, 5,000 and 5,000 reviews in the training, development and test sets respectively. The maximum number of reviews per reviewer is 20 and the maximum number of reviews per product is 20. All reviews are truncated after 2,000 characters, and all reviews are at least 20 characters long.\n\nNote that the language of a review does not necessarily match the language of its marketplace (e.g. reviews from URL are primarily written in German, but could also be written in English, etc.). For this reason, we applied a language detection algorithm based on the work in Bojanowski et al. 
(2017) to determine the language of the review text and we removed reviews that were not written in the expected language.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe dataset contains reviews in English, Japanese, German, French, Chinese and Spanish.", "## Dataset Structure", "### Data Instances\n\nEach data instance corresponds to a review. The original JSON for an instance looks like so (German example):", "### Data Fields\n\n- 'review_id': A string identifier of the review.\n- 'product_id': A string identifier of the product being reviewed.\n- 'reviewer_id': A string identifier of the reviewer.\n- 'stars': An int between 1-5 indicating the number of stars.\n- 'review_body': The text body of the review.\n- 'review_title': The text title of the review.\n- 'language': The string identifier of the review language.\n- 'product_category': String representation of the product's category.", "### Data Splits\n\nEach language configuration comes with its own 'train', 'validation', and 'test' splits. The 'all_languages' split\nis simply a concatenation of the corresponding split across all languages. That is, the 'train' split for\n'all_languages' is a concatenation of the 'train' splits for each of the languages and likewise for 'validation' and\n'test'.", "## Dataset Creation", "### Curation Rationale\n\nThe dataset is motivated by the desire to advance sentiment analysis and text classification in other (non-English)\nlanguages.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe authors gathered the reviews from the marketplaces in the US, Japan, Germany, France, Spain, and China for the\nEnglish, Japanese, German, French, Spanish, and Chinese languages, respectively. They then ensured the correct\nlanguage by applying a language detection algorithm, only retaining those of the target language. In a random sample\nof the resulting reviews, the authors observed a small percentage of target languages that were incorrectly filtered\nout and a very few mismatched languages that were incorrectly retained.", "#### Who are the source language producers?\n\nThe original text comes from Amazon customers reviewing products on the marketplace across a variety of product\ncategories.", "### Annotations", "#### Annotation process\n\nEach of the fields included are submitted by the user with the review or otherwise associated with the review. No\nmanual or machine-driven annotation was necessary.", "#### Who are the annotators?\n\nN/A", "### Personal and Sensitive Information\n\nAccording to the original dataset license terms, you may not:\n- link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or \n- attempt to determine the identity of the author of any content in the Reviews Corpus.\n\nIf you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically\nterminate without prejudice to any of the other rights or remedies Amazon may have.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is part of an effort to encourage text classification research in languages other than English. Such\nwork increases the accessibility of natural language technology to more regions and cultures. 
Unfortunately, each of\nthe languages included here is relatively high resource and well studied.", "### Discussion of Biases\n\nThe dataset contains only reviews from verified purchases (as described in the paper, section 2.1), and the reviews\nshould conform the Amazon Community Guidelines.", "### Other Known Limitations\n\nThe dataset is constructed so that the distribution of star ratings is balanced. This feature has some advantages for\npurposes of classification, but some types of language may be over or underrepresented relative to the original\ndistribution of reviews to achieve this balance.", "## Additional Information", "### Dataset Curators\n\nPublished by Phillip Keung, Yichao Lu, György Szarvas, and Noah A. Smith. Managed by Amazon.", "### Licensing Information\n\nAmazon has licensed this dataset under its own agreement for non-commercial research usage only. This licence is quite restrictive preventing use anywhere a fee is received including paid for internships etc. A copy of the agreement can be found at the dataset webpage here:\nURL\n\nBy accessing the Multilingual Amazon Reviews Corpus (\"Reviews Corpus\"), you agree that the Reviews Corpus is an Amazon Service subject to the URL Conditions of Use and you agree to be bound by them, with the following additional conditions:\n\nIn addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Corpus for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Corpus or its contents, including use of the Reviews Corpus for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Corpus. If you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically terminate without prejudice to any of the other rights or remedies Amazon may have.\n\n\n\nPlease cite the following paper (arXiv) if you found this dataset useful:\n\nPhillip Keung, Yichao Lu, György Szarvas and Noah A. Smith. “The Multilingual Amazon Reviews Corpus.” In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020.", "### Contributions\n\nThanks to @joeddav for adding this dataset." ]
[ 223, 13, 175, 29, 433, 10, 23, 6, 29, 141, 105, 5, 34, 4, 126, 31, 5, 38, 12, 105, 8, 67, 41, 63, 5, 34, 412, 17 ]
[ "passage: TAGS\n#task_categories-summarization #task_categories-text-generation #task_categories-fill-mask #task_categories-text-classification #task_ids-text-scoring #task_ids-language-modeling #task_ids-masked-language-modeling #task_ids-sentiment-classification #task_ids-sentiment-scoring #task_ids-topic-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-original #language-German #language-English #language-Spanish #language-French #language-Japanese #language-Chinese #license-other #arxiv-2010.02573 #region-us \n# Dataset Card for The Multilingual Amazon Reviews Corpus## Table of Contents\n- Dataset Card for amazon_reviews_multi\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - plain_text\n - Data Fields\n - plain_text\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Webpage: URL\n- Paper: URL\n- Point of Contact: multilingual-reviews-dataset@URL", "passage: ### Dataset Summary\n\n<div class=\"course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400\">\n <p><b>Defunct:</b> Dataset \"amazon_reviews_multi\" is defunct and no longer accessible due to the decision of data providers.</p>\n</div>\n\nWe provide an Amazon product reviews dataset for multilingual text classification. The dataset contains reviews in English, Japanese, German, French, Chinese and Spanish, collected between November 1, 2015 and November 1, 2019. Each record in the dataset contains the review text, the review title, the star rating, an anonymized reviewer ID, an anonymized product ID and the coarse-grained product category (e.g. ‘books’, ‘appliances’, etc.) The corpus is balanced across stars, so each star rating constitutes 20% of the reviews in each language.\n\nFor each language, there are 200,000, 5,000 and 5,000 reviews in the training, development and test sets respectively. The maximum number of reviews per reviewer is 20 and the maximum number of reviews per product is 20. All reviews are truncated after 2,000 characters, and all reviews are at least 20 characters long.\n\nNote that the language of a review does not necessarily match the language of its marketplace (e.g. reviews from URL are primarily written in German, but could also be written in English, etc.). For this reason, we applied a language detection algorithm based on the work in Bojanowski et al. 
(2017) to determine the language of the review text and we removed reviews that were not written in the expected language.### Supported Tasks and Leaderboards### Languages\n\nThe dataset contains reviews in English, Japanese, German, French, Chinese and Spanish.## Dataset Structure### Data Instances\n\nEach data instance corresponds to a review. The original JSON for an instance looks like so (German example):### Data Fields\n\n- 'review_id': A string identifier of the review.\n- 'product_id': A string identifier of the product being reviewed.\n- 'reviewer_id': A string identifier of the reviewer.\n- 'stars': An int between 1-5 indicating the number of stars.\n- 'review_body': The text body of the review.\n- 'review_title': The text title of the review.\n- 'language': The string identifier of the review language.\n- 'product_category': String representation of the product's category.### Data Splits\n\nEach language configuration comes with its own 'train', 'validation', and 'test' splits. The 'all_languages' split\nis simply a concatenation of the corresponding split across all languages. That is, the 'train' split for\n'all_languages' is a concatenation of the 'train' splits for each of the languages and likewise for 'validation' and\n'test'.## Dataset Creation### Curation Rationale\n\nThe dataset is motivated by the desire to advance sentiment analysis and text classification in other (non-English)\nlanguages.### Source Data#### Initial Data Collection and Normalization\n\nThe authors gathered the reviews from the marketplaces in the US, Japan, Germany, France, Spain, and China for the\nEnglish, Japanese, German, French, Spanish, and Chinese languages, respectively. They then ensured the correct\nlanguage by applying a language detection algorithm, only retaining those of the target language. In a random sample\nof the resulting reviews, the authors observed a small percentage of target languages that were incorrectly filtered\nout and a very few mismatched languages that were incorrectly retained.", "passage: #### Who are the source language producers?\n\nThe original text comes from Amazon customers reviewing products on the marketplace across a variety of product\ncategories.### Annotations#### Annotation process\n\nEach of the fields included are submitted by the user with the review or otherwise associated with the review. No\nmanual or machine-driven annotation was necessary.#### Who are the annotators?\n\nN/A### Personal and Sensitive Information\n\nAccording to the original dataset license terms, you may not:\n- link or associate content in the Reviews Corpus with any personal information (including Amazon customer accounts), or \n- attempt to determine the identity of the author of any content in the Reviews Corpus.\n\nIf you violate any of the foregoing conditions, your license to access and use the Reviews Corpus will automatically\nterminate without prejudice to any of the other rights or remedies Amazon may have.## Considerations for Using the Data### Social Impact of Dataset\n\nThis dataset is part of an effort to encourage text classification research in languages other than English. Such\nwork increases the accessibility of natural language technology to more regions and cultures. 
Unfortunately, each of\nthe languages included here is relatively high resource and well studied.### Discussion of Biases\n\nThe dataset contains only reviews from verified purchases (as described in the paper, section 2.1), and the reviews\nshould conform the Amazon Community Guidelines.### Other Known Limitations\n\nThe dataset is constructed so that the distribution of star ratings is balanced. This feature has some advantages for\npurposes of classification, but some types of language may be over or underrepresented relative to the original\ndistribution of reviews to achieve this balance.## Additional Information### Dataset Curators\n\nPublished by Phillip Keung, Yichao Lu, György Szarvas, and Noah A. Smith. Managed by Amazon." ]
e1bfd57e2da5dc7dc4c748eb4a4a112c71e85162
# Dataset Card for "amazon_us_reviews" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://s3.amazonaws.com/amazon-reviews-pds/readme.html](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 32377.29 MB - **Size of the generated dataset:** 82820.19 MB - **Total amount of disk used:** 115197.49 MB ### Dataset Summary <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Defunct:</b> Dataset "amazon_us_reviews" is defunct and no longer accessible due to the decision of data providers.</p> </div> Amazon Customer Reviews (a.k.a. Product Reviews) is one of Amazons iconic products. In a period of over two decades since the first review in 1995, millions of Amazon customers have contributed over a hundred million reviews to express opinions and describe their experiences regarding products on the Amazon.com website. This makes Amazon Customer Reviews a rich source of information for academic researchers in the fields of Natural Language Processing (NLP), Information Retrieval (IR), and Machine Learning (ML), amongst others. Accordingly, we are releasing this data to further research in multiple disciplines related to understanding customer product experiences. Specifically, this dataset was constructed to represent a sample of customer evaluations and opinions, variation in the perception of a product across geographical regions, and promotional intent or bias in reviews. Over 130+ million customer reviews are available to researchers as part of this release. The data is available in TSV files in the amazon-reviews-pds S3 bucket in AWS US East Region. Each line in the data files corresponds to an individual review (tab delimited, with no quote and escape characters). Each Dataset contains the following columns : marketplace - 2 letter country code of the marketplace where the review was written. 
- `customer_id`: random identifier that can be used to aggregate reviews written by a single author.
- `review_id`: the unique ID of the review.
- `product_id`: the unique product ID the review pertains to. In the multilingual dataset the reviews for the same product in different countries can be grouped by the same `product_id`.
- `product_parent`: random identifier that can be used to aggregate reviews for the same product.
- `product_title`: title of the product.
- `product_category`: broad product category that can be used to group reviews (also used to group the dataset into coherent parts).
- `star_rating`: the 1-5 star rating of the review.
- `helpful_votes`: number of helpful votes.
- `total_votes`: number of total votes the review received.
- `vine`: review was written as part of the Vine program.
- `verified_purchase`: the review is on a verified purchase.
- `review_headline`: the title of the review.
- `review_body`: the review text.
- `review_date`: the date the review was written.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### Apparel_v1_00

- **Size of downloaded dataset files:** 648.64 MB
- **Size of the generated dataset:** 2254.36 MB
- **Total amount of disk used:** 2903.00 MB

An example of 'train' looks as follows.
```
{
    "customer_id": "45223824",
    "helpful_votes": 0,
    "marketplace": "US",
    "product_category": "Apparel",
    "product_id": "B016PUU3VO",
    "product_parent": "893588059",
    "product_title": "Fruit of the Loom Boys' A-Shirt (Pack of 4)",
    "review_body": "I ordered the same size as I ordered last time, and these shirts were much larger than the previous order. They were also about 6 inches longer. It was like they sent men's shirts instead of boys' shirts. I'll be returning these...",
    "review_date": "2015-01-01",
    "review_headline": "Sizes not correct, too big overall and WAY too long",
    "review_id": "R1N3Z13931J3O9",
    "star_rating": 2,
    "total_votes": 0,
    "verified_purchase": 1,
    "vine": 0
}
```

#### Automotive_v1_00

- **Size of downloaded dataset files:** 582.15 MB
- **Size of the generated dataset:** 1518.88 MB
- **Total amount of disk used:** 2101.03 MB

An example of 'train' looks as follows.
```
{
    "customer_id": "16825098",
    "helpful_votes": 0,
    "marketplace": "US",
    "product_category": "Automotive",
    "product_id": "B000E4PCGE",
    "product_parent": "694793259",
    "product_title": "00-03 NISSAN SENTRA MIRROR RH (PASSENGER SIDE), Power, Non-Heated (2000 00 2001 01 2002 02 2003 03) NS35ER 963015M000",
    "review_body": "Product was as described, new and a great look. Only bad thing is that one of the screws was stripped so I couldn't tighten all three.",
    "review_date": "2015-08-31",
    "review_headline": "new and a great look. Only bad thing is that one of ...",
    "review_id": "R2RUIDUMDKG7P",
    "star_rating": 3,
    "total_votes": 0,
    "verified_purchase": 1,
    "vine": 0
}
```

#### Baby_v1_00

- **Size of downloaded dataset files:** 357.40 MB
- **Size of the generated dataset:** 956.30 MB
- **Total amount of disk used:** 1313.70 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "customer_id": "23299101",
    "helpful_votes": 2,
    "marketplace": "US",
    "product_category": "Baby",
    "product_id": "B00SN6F9NG",
    "product_parent": "3470998",
    "product_title": "Rhoost Nail Clipper for Baby - Ergonomically Designed and Easy to Use Baby Nail Clipper, Natural Wooden Bamboo - Baby Health and Personal Care Kits",
    "review_body": "\"This is an absolute MUST item to have! I was scared to death to clip my baby's nails. I tried other baby nail clippers and th...",
    "review_date": "2015-08-31",
    "review_headline": "If fits so comfortably in my hand and I feel like I have ...",
    "review_id": "R2DRL5NRODVQ3Z",
    "star_rating": 5,
    "total_votes": 2,
    "verified_purchase": 1,
    "vine": 0
}
```

#### Beauty_v1_00

- **Size of downloaded dataset files:** 914.08 MB
- **Size of the generated dataset:** 2397.39 MB
- **Total amount of disk used:** 3311.47 MB

An example of 'train' looks as follows.
```
{
    "customer_id": "24655453",
    "helpful_votes": 1,
    "marketplace": "US",
    "product_category": "Beauty",
    "product_id": "B00SAQ9DZY",
    "product_parent": "292127037",
    "product_title": "12 New, High Quality, Amber 2 ml (5/8 Dram) Glass Bottles, with Orifice Reducer and Black Cap.",
    "review_body": "These are great for small mixtures for EO's, especially for traveling. I only gave this 4 stars because of the orifice reducer. The hole is so small it is hard to get the oil out. Just needs to be slightly bigger.",
    "review_date": "2015-08-31",
    "review_headline": "Good Product",
    "review_id": "R2A30ALEGLMCGN",
    "star_rating": 4,
    "total_votes": 1,
    "verified_purchase": 1,
    "vine": 0
}
```

#### Books_v1_00

- **Size of downloaded dataset files:** 2740.34 MB
- **Size of the generated dataset:** 7193.86 MB
- **Total amount of disk used:** 9934.20 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "customer_id": "49735028",
    "helpful_votes": 0,
    "marketplace": "US",
    "product_category": "Books",
    "product_id": "0664254969",
    "product_parent": "248307276",
    "product_title": "Presbyterian Creeds: A Guide to the Book of Confessions",
    "review_body": "\"The Presbyterian Book of Confessions contains multiple Creeds for use by the denomination. This guidebook helps he lay person t...",
    "review_date": "2015-08-31",
    "review_headline": "The Presbyterian Book of Confessions contains multiple Creeds for use ...",
    "review_id": "R2G519UREHRO8M",
    "star_rating": 3,
    "total_votes": 1,
    "verified_purchase": 1,
    "vine": 0
}
```

### Data Fields

The data fields are the same among all splits.

#### Apparel_v1_00
- `marketplace`: a `string` feature.
- `customer_id`: a `string` feature.
- `review_id`: a `string` feature.
- `product_id`: a `string` feature.
- `product_parent`: a `string` feature.
- `product_title`: a `string` feature.
- `product_category`: a `string` feature.
- `star_rating`: an `int32` feature.
- `helpful_votes`: an `int32` feature.
- `total_votes`: an `int32` feature.
- `vine`: a classification label, with possible values including `N` (0), `Y` (1).
- `verified_purchase`: a classification label, with possible values including `N` (0), `Y` (1).
- `review_headline`: a `string` feature.
- `review_body`: a `string` feature.
- `review_date`: a `string` feature.

#### Automotive_v1_00
- `marketplace`: a `string` feature.
- `customer_id`: a `string` feature.
- `review_id`: a `string` feature.
- `product_id`: a `string` feature.
- `product_parent`: a `string` feature.
- `product_title`: a `string` feature.
- `product_category`: a `string` feature.
- `star_rating`: an `int32` feature.
- `helpful_votes`: an `int32` feature.
- `total_votes`: an `int32` feature.
- `vine`: a classification label, with possible values including `N` (0), `Y` (1).
- `verified_purchase`: a classification label, with possible values including `N` (0), `Y` (1).
- `review_headline`: a `string` feature.
- `review_body`: a `string` feature.
- `review_date`: a `string` feature.

#### Baby_v1_00
- `marketplace`: a `string` feature.
- `customer_id`: a `string` feature.
- `review_id`: a `string` feature.
- `product_id`: a `string` feature.
- `product_parent`: a `string` feature.
- `product_title`: a `string` feature.
- `product_category`: a `string` feature.
- `star_rating`: an `int32` feature.
- `helpful_votes`: an `int32` feature.
- `total_votes`: an `int32` feature.
- `vine`: a classification label, with possible values including `N` (0), `Y` (1).
- `verified_purchase`: a classification label, with possible values including `N` (0), `Y` (1).
- `review_headline`: a `string` feature.
- `review_body`: a `string` feature.
- `review_date`: a `string` feature.

#### Beauty_v1_00
- `marketplace`: a `string` feature.
- `customer_id`: a `string` feature.
- `review_id`: a `string` feature.
- `product_id`: a `string` feature.
- `product_parent`: a `string` feature.
- `product_title`: a `string` feature.
- `product_category`: a `string` feature.
- `star_rating`: an `int32` feature.
- `helpful_votes`: an `int32` feature.
- `total_votes`: an `int32` feature.
- `vine`: a classification label, with possible values including `N` (0), `Y` (1).
- `verified_purchase`: a classification label, with possible values including `N` (0), `Y` (1).
- `review_headline`: a `string` feature.
- `review_body`: a `string` feature.
- `review_date`: a `string` feature.

#### Books_v1_00
- `marketplace`: a `string` feature.
- `customer_id`: a `string` feature.
- `review_id`: a `string` feature.
- `product_id`: a `string` feature.
- `product_parent`: a `string` feature.
- `product_title`: a `string` feature.
- `product_category`: a `string` feature.
- `star_rating`: an `int32` feature.
- `helpful_votes`: an `int32` feature.
- `total_votes`: an `int32` feature.
- `vine`: a classification label, with possible values including `N` (0), `Y` (1).
- `verified_purchase`: a classification label, with possible values including `N` (0), `Y` (1).
- `review_headline`: a `string` feature.
- `review_body`: a `string` feature.
- `review_date`: a `string` feature.
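The per-config field lists above all share one schema, and the underlying files are plain TSV (tab delimited, with no quote and escape characters, as described in the summary). Below is a minimal sketch of reading one of these files with Python's standard `csv` module; the filename is hypothetical, and the sketch assumes a locally downloaded copy that still includes its header row:

```python
import csv

# Hypothetical local copy of one of the TSV files described above.
PATH = "amazon_reviews_us_Apparel_v1_00.tsv"

with open(PATH, newline="", encoding="utf-8") as f:
    # The files are tab delimited with no quote or escape characters,
    # so quoting must be disabled for the reader.
    reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for row in reader:
        # All values arrive as strings; numeric columns need conversion.
        stars = int(row["star_rating"])
        helpful = int(row["helpful_votes"])
        total = int(row["total_votes"])
        # In the raw TSV, `vine` and `verified_purchase` are "Y"/"N" strings;
        # the integer encoding (N = 0, Y = 1) is applied on load.
        verified = row["verified_purchase"] == "Y"
        print(stars, helpful, total, verified, row["review_headline"])
        break  # just show the first review
```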
### Data Splits | name | train | |----------------|-------:| |Apparel_v1_00 | 5906333| |Automotive_v1_00 | 3514942| |Baby_v1_00 | 1752932| |Beauty_v1_00 | 5115666| |Books_v1_00 | 10319090| |Books_v1_01 | 6106719| |Books_v1_02 | 3105520| |Camera_v1_00 | 1801974| |Digital_Ebook_Purchase_v1_00 | 12520722| |Digital_Ebook_Purchase_v1_01 | 5101693| |Digital_Music_Purchase_v1_00 | 1688884| |Digital_Software_v1_00 | 102084| |Digital_Video_Download_v1_00 | 4057147| |Digital_Video_Games_v1_00 | 145431| |Electronics_v1_00 | 3093869| |Furniture_v1_00 | 792113| |Gift_Card_v1_00 | 149086| |Grocery_v1_00 | 2402458| |Health_Personal_Care_v1_00 | 5331449| |Home_Entertainment_v1_00 | 705889| |Home_Improvement_v1_00 | 2634781| |Home_v1_00 | 6221559| |Jewelry_v1_00 | 1767753| |Kitchen_v1_00 | 4880466| |Lawn_and_Garden_v1_00 | 2557288| |Luggage_v1_00 | 348657| |Major_Appliances_v1_00 | 96901| |Mobile_Apps_v1_00 | 5033376| |Mobile_Electronics_v1_00 | 104975| |Music_v1_00 | 4751577| |Musical_Instruments_v1_00 | 904765| |Office_Products_v1_00 | 2642434| |Outdoors_v1_00 | 2302401| |PC_v1_00 | 6908554| |Personal_Care_Appliances_v1_00 | 85981| |Pet_Products_v1_00 | 2643619| |Shoes_v1_00 | 4366916| |Software_v1_00 | 341931| |Sports_v1_00 | 4850360| |Tools_v1_00 | 1741100| |Toys_v1_00 | 4864249| |Video_DVD_v1_00 | 5069140| |Video_Games_v1_00 | 1785997| |Video_v1_00 | 380604| |Watches_v1_00 | 960872| |Wireless_v1_00 | 9002021| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information https://s3.amazonaws.com/amazon-reviews-pds/LICENSE.txt By accessing the Amazon Customer Reviews Library ("Reviews Library"), you agree that the Reviews Library is an Amazon Service subject to the [Amazon.com Conditions of Use](https://www.amazon.com/gp/help/customer/display.html/ref=footer_cou?ie=UTF8&nodeId=508088) and you agree to be bound by them, with the following additional conditions: In addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Library for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Library or its contents, including use of the Reviews Library for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Library with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Library. If you violate any of the foregoing conditions, your license to access and use the Reviews Library will automatically terminate without prejudice to any of the other rights or remedies Amazon may have. ### Citation Information No citation information. ### Contributions Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
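As an illustrative aside, the `dataset_info` metadata further down defines both `vine` and `verified_purchase` as class labels with names `{"0": "N", "1": "Y"}`. A minimal sketch of decoding the integer values seen in the examples above back to their string names, assuming the `datasets` library is installed:

```python
from datasets import ClassLabel

# Class-label definition mirroring the dataset_info metadata below:
# 0 -> "N", 1 -> "Y" for both `vine` and `verified_purchase`.
label = ClassLabel(names=["N", "Y"])

print(label.int2str(1))    # "Y", e.g. a verified purchase
print(label.str2int("N"))  # 0
```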
amazon_us_reviews
[ "task_categories:summarization", "task_categories:text-generation", "task_categories:fill-mask", "task_categories:text-classification", "task_ids:text-scoring", "task_ids:language-modeling", "task_ids:masked-language-modeling", "task_ids:sentiment-classification", "task_ids:sentiment-scoring", "task_ids:topic-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100M<n<1B", "source_datasets:original", "language:en", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["summarization", "text-generation", "fill-mask", "text-classification"], "task_ids": ["text-scoring", "language-modeling", "masked-language-modeling", "sentiment-classification", "sentiment-scoring", "topic-classification"], "pretty_name": "Amazon US Reviews", "viewer": false, "dataset_info": [{"config_name": "Books_v1_01", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6997552259, "num_examples": 6106719}], "download_size": 2692708591, "dataset_size": 6997552259}, {"config_name": "Watches_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 458976082, "num_examples": 960872}], "download_size": 162973819, "dataset_size": 458976082}, {"config_name": "Personal_Care_Appliances_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 49036547, "num_examples": 85981}], "download_size": 17634794, "dataset_size": 49036547}, {"config_name": "Mobile_Electronics_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", 
"dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 63293377, "num_examples": 104975}], "download_size": 22870508, "dataset_size": 63293377}, {"config_name": "Digital_Video_Games_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 80176851, "num_examples": 145431}], "download_size": 27442648, "dataset_size": 80176851}, {"config_name": "Digital_Software_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 58782931, "num_examples": 102084}], "download_size": 18997559, "dataset_size": 58782931}, {"config_name": "Major_Appliances_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 67642424, "num_examples": 96901}], "download_size": 24359816, "dataset_size": 67642424}, 
{"config_name": "Gift_Card_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 47188062, "num_examples": 149086}], "download_size": 12134676, "dataset_size": 47188062}, {"config_name": "Video_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 356264426, "num_examples": 380604}], "download_size": 138929896, "dataset_size": 356264426}, {"config_name": "Luggage_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 167354173, "num_examples": 348657}], "download_size": 60320191, "dataset_size": 167354173}, {"config_name": "Software_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": 
"review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 266020595, "num_examples": 341931}], "download_size": 94010685, "dataset_size": 266020595}, {"config_name": "Video_Games_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1291054668, "num_examples": 1785997}], "download_size": 475199894, "dataset_size": 1291054668}, {"config_name": "Furniture_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 405212374, "num_examples": 792113}], "download_size": 148982796, "dataset_size": 405212374}, {"config_name": "Musical_Instruments_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 518908568, "num_examples": 904765}], "download_size": 193389086, "dataset_size": 518908568}, {"config_name": "Digital_Music_Purchase_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", 
"1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 710546079, "num_examples": 1688884}], "download_size": 253570168, "dataset_size": 710546079}, {"config_name": "Books_v1_02", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3387034903, "num_examples": 3105520}], "download_size": 1329539135, "dataset_size": 3387034903}, {"config_name": "Home_Entertainment_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 534333848, "num_examples": 705889}], "download_size": 193168458, "dataset_size": 534333848}, {"config_name": "Grocery_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1072289473, "num_examples": 2402458}], "download_size": 401337166, "dataset_size": 1072289473}, {"config_name": "Outdoors_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, 
{"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1172986088, "num_examples": 2302401}], "download_size": 448963100, "dataset_size": 1172986088}, {"config_name": "Pet_Products_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1355659812, "num_examples": 2643619}], "download_size": 515815253, "dataset_size": 1355659812}, {"config_name": "Video_DVD_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3953234561, "num_examples": 5069140}], "download_size": 1512355451, "dataset_size": 3953234561}, {"config_name": "Apparel_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2256558450, "num_examples": 5906333}], "download_size": 648641286, "dataset_size": 2256558450}, {"config_name": "PC_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, 
{"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3982684438, "num_examples": 6908554}], "download_size": 1512903923, "dataset_size": 3982684438}, {"config_name": "Tools_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 872273119, "num_examples": 1741100}], "download_size": 333782939, "dataset_size": 872273119}, {"config_name": "Jewelry_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 703275869, "num_examples": 1767753}], "download_size": 247022254, "dataset_size": 703275869}, {"config_name": "Baby_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 956952590, "num_examples": 1752932}], "download_size": 357392893, "dataset_size": 956952590}, 
{"config_name": "Home_Improvement_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1329688315, "num_examples": 2634781}], "download_size": 503339178, "dataset_size": 1329688315}, {"config_name": "Camera_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1187101912, "num_examples": 1801974}], "download_size": 442653086, "dataset_size": 1187101912}, {"config_name": "Lawn_and_Garden_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1272255987, "num_examples": 2557288}], "download_size": 486772662, "dataset_size": 1272255987}, {"config_name": "Office_Products_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": 
"review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1370685534, "num_examples": 2642434}], "download_size": 512323500, "dataset_size": 1370685534}, {"config_name": "Electronics_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1875406721, "num_examples": 3093869}], "download_size": 698828243, "dataset_size": 1875406721}, {"config_name": "Automotive_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1520191087, "num_examples": 3514942}], "download_size": 582145299, "dataset_size": 1520191087}, {"config_name": "Digital_Video_Download_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1484214187, "num_examples": 4057147}], "download_size": 506979922, "dataset_size": 1484214187}, {"config_name": "Mobile_Apps_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", 
"dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1627857158, "num_examples": 5033376}], "download_size": 557959415, "dataset_size": 1627857158}, {"config_name": "Shoes_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1781283508, "num_examples": 4366916}], "download_size": 642255314, "dataset_size": 1781283508}, {"config_name": "Toys_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2197820069, "num_examples": 4864249}], "download_size": 838451398, "dataset_size": 2197820069}, {"config_name": "Sports_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2241349145, "num_examples": 4850360}], "download_size": 872478735, "dataset_size": 2241349145}, {"config_name": "Kitchen_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": 
"product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2453735305, "num_examples": 4880466}], "download_size": 930744854, "dataset_size": 2453735305}, {"config_name": "Beauty_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2399292506, "num_examples": 5115666}], "download_size": 914070021, "dataset_size": 2399292506}, {"config_name": "Music_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3900138839, "num_examples": 4751577}], "download_size": 1521994296, "dataset_size": 3900138839}, {"config_name": "Health_Personal_Care_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2679427491, "num_examples": 5331449}], "download_size": 1011180212, "dataset_size": 2679427491}, {"config_name": "Digital_Ebook_Purchase_v1_01", "features": [{"name": "marketplace", "dtype": "string"}, {"name": 
"customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3470453859, "num_examples": 5101693}], "download_size": 1294879074, "dataset_size": 3470453859}, {"config_name": "Home_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2796680249, "num_examples": 6221559}], "download_size": 1081002012, "dataset_size": 2796680249}, {"config_name": "Wireless_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4633213433, "num_examples": 9002021}], "download_size": 1704713674, "dataset_size": 4633213433}, {"config_name": "Books_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7197687124, 
"num_examples": 10319090}], "download_size": 2740337188, "dataset_size": 7197687124}, {"config_name": "Digital_Ebook_Purchase_v1_00", "features": [{"name": "marketplace", "dtype": "string"}, {"name": "customer_id", "dtype": "string"}, {"name": "review_id", "dtype": "string"}, {"name": "product_id", "dtype": "string"}, {"name": "product_parent", "dtype": "string"}, {"name": "product_title", "dtype": "string"}, {"name": "product_category", "dtype": "string"}, {"name": "star_rating", "dtype": "int32"}, {"name": "helpful_votes", "dtype": "int32"}, {"name": "total_votes", "dtype": "int32"}, {"name": "vine", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "verified_purchase", "dtype": {"class_label": {"names": {"0": "N", "1": "Y"}}}}, {"name": "review_headline", "dtype": "string"}, {"name": "review_body", "dtype": "string"}, {"name": "review_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7302303804, "num_examples": 12520722}], "download_size": 2689739299, "dataset_size": 7302303804}]}
2023-11-02T14:57:03+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text-generation #task_categories-fill-mask #task_categories-text-classification #task_ids-text-scoring #task_ids-language-modeling #task_ids-masked-language-modeling #task_ids-sentiment-classification #task_ids-sentiment-scoring #task_ids-topic-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-other #region-us
Dataset Card for "amazon\_us\_reviews"
======================================

Table of Contents
-----------------

* Dataset Description
    + Dataset Summary
    + Supported Tasks and Leaderboards
    + Languages
* Dataset Structure
    + Data Instances
    + Data Fields
    + Data Splits
* Dataset Creation
    + Curation Rationale
    + Source Data
    + Annotations
    + Personal and Sensitive Information
* Considerations for Using the Data
    + Social Impact of Dataset
    + Discussion of Biases
    + Other Known Limitations
* Additional Information
    + Dataset Curators
    + Licensing Information
    + Citation Information
    + Contributions

Dataset Description
-------------------

* Homepage: URL
* Repository:
* Paper:
* Point of Contact:
* Size of downloaded dataset files: 32377.29 MB
* Size of the generated dataset: 82820.19 MB
* Total amount of disk used: 115197.49 MB

### Dataset Summary

**Defunct:** Dataset "amazon\_us\_reviews" is defunct and no longer accessible due to the decision of data providers.

Amazon Customer Reviews (a.k.a. Product Reviews) is one of Amazon's iconic products. In a period of over two decades since the first review in 1995, millions of Amazon customers have contributed over a hundred million reviews to express opinions and describe their experiences regarding products on the URL website. This makes Amazon Customer Reviews a rich source of information for academic researchers in the fields of Natural Language Processing (NLP), Information Retrieval (IR), and Machine Learning (ML), amongst others. Accordingly, we are releasing this data to further research in multiple disciplines related to understanding customer product experiences. Specifically, this dataset was constructed to represent a sample of customer evaluations and opinions, variation in the perception of a product across geographical regions, and promotional intent or bias in reviews.

Over 130 million customer reviews are available to researchers as part of this release. The data is available in TSV files in the amazon-reviews-pds S3 bucket in the AWS US East Region. Each line in the data files corresponds to an individual review (tab delimited, with no quote and escape characters).

Each dataset contains the following columns:

* marketplace - 2-letter country code of the marketplace where the review was written.
* customer\_id - Random identifier that can be used to aggregate reviews written by a single author.
* review\_id - The unique ID of the review.
* product\_id - The unique product ID the review pertains to. In the multilingual dataset, the reviews for the same product in different countries can be grouped by the same product\_id.
* product\_parent - Random identifier that can be used to aggregate reviews for the same product.
* product\_title - Title of the product.
* product\_category - Broad product category that can be used to group reviews (also used to group the dataset into coherent parts).
* star\_rating - The 1-5 star rating of the review.
* helpful\_votes - Number of helpful votes.
* total\_votes - Number of total votes the review received.
* vine - Whether the review was written as part of the Vine program.
* verified\_purchase - Whether the review is on a verified purchase.
* review\_headline - The title of the review.
* review\_body - The review text.
* review\_date - The date the review was written.

### Supported Tasks and Leaderboards

### Languages

Dataset Structure
-----------------

### Data Instances

#### Apparel\_v1\_00

* Size of downloaded dataset files: 648.64 MB
* Size of the generated dataset: 2254.36 MB
* Total amount of disk used: 2903.00 MB

An example of 'train' looks as follows.
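(Schematic illustration only: the field names and types follow the Data Fields schema documented below, while every value is an invented placeholder rather than a real review.)

```
# Schematic record for Apparel_v1_00; all values are invented placeholders.
example = {
    "marketplace": "US",            # 2-letter marketplace country code
    "customer_id": "12345678",      # random author identifier
    "review_id": "R1XXXXXXXXXXXX",  # unique review ID (placeholder)
    "product_id": "B00XXXXXXX",     # unique product ID (placeholder)
    "product_parent": "987654321",  # random product-group identifier
    "product_title": "Example T-Shirt",
    "product_category": "Apparel",
    "star_rating": 5,               # int32, 1-5
    "helpful_votes": 0,             # int32
    "total_votes": 0,               # int32
    "vine": 0,                      # encoded class label (see Data Fields)
    "verified_purchase": 1,         # encoded class label (see Data Fields)
    "review_headline": "Fits well",
    "review_body": "Exactly what I expected.",
    "review_date": "2015-08-31",
}
```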
#### Automotive\_v1\_00

* Size of downloaded dataset files: 582.15 MB
* Size of the generated dataset: 1518.88 MB
* Total amount of disk used: 2101.03 MB

An example of 'train' follows the same structure as the schematic record shown above.

#### Baby\_v1\_00

* Size of downloaded dataset files: 357.40 MB
* Size of the generated dataset: 956.30 MB
* Total amount of disk used: 1313.70 MB

An example of 'train' follows the same structure as the schematic record shown above.

#### Beauty\_v1\_00

* Size of downloaded dataset files: 914.08 MB
* Size of the generated dataset: 2397.39 MB
* Total amount of disk used: 3311.47 MB

An example of 'train' follows the same structure as the schematic record shown above.

#### Books\_v1\_00

* Size of downloaded dataset files: 2740.34 MB
* Size of the generated dataset: 7193.86 MB
* Total amount of disk used: 9934.20 MB

An example of 'train' follows the same structure as the schematic record shown above.

### Data Fields

The data fields are the same among all splits.

#### Apparel\_v1\_00

* 'marketplace': a 'string' feature.
* 'customer\_id': a 'string' feature.
* 'review\_id': a 'string' feature.
* 'product\_id': a 'string' feature.
* 'product\_parent': a 'string' feature.
* 'product\_title': a 'string' feature.
* 'product\_category': a 'string' feature.
* 'star\_rating': an 'int32' feature.
* 'helpful\_votes': an 'int32' feature.
* 'total\_votes': an 'int32' feature.
* 'vine': a classification label, with possible values including 'Y' (0), 'N' (1).
* 'verified\_purchase': a classification label, with possible values including 'Y' (0), 'N' (1).
* 'review\_headline': a 'string' feature.
* 'review\_body': a 'string' feature.
* 'review\_date': a 'string' feature.

The Automotive\_v1\_00, Baby\_v1\_00, Beauty\_v1\_00, and Books\_v1\_00 configurations share exactly the same fields as Apparel\_v1\_00.

### Data Splits

Dataset Creation
----------------

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

Considerations for Using the Data
---------------------------------

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

Additional Information
----------------------

### Dataset Curators

### Licensing Information

URL

By accessing the Amazon Customer Reviews Library ("Reviews Library"), you agree that the Reviews Library is an Amazon Service subject to the URL Conditions of Use and you agree to be bound by them, with the following additional conditions:

In addition to the license rights granted under the Conditions of Use, Amazon or its content providers grant you a limited, non-exclusive, non-transferable, non-sublicensable, revocable license to access and use the Reviews Library for purposes of academic research. You may not resell, republish, or make any commercial use of the Reviews Library or its contents, including use of the Reviews Library for commercial research, such as research related to a funding or consultancy contract, internship, or other relationship in which the results are provided for a fee or delivered to a for-profit organization. You may not (a) link or associate content in the Reviews Library with any personal information (including Amazon customer accounts), or (b) attempt to determine the identity of the author of any content in the Reviews Library. If you violate any of the foregoing conditions, your license to access and use the Reviews Library will automatically terminate without prejudice to any of the other rights or remedies Amazon may have.

No citation information.

### Contributions

Thanks to @joeddav for adding this dataset.
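As a practical footnote to the format described in the Dataset Summary (tab-delimited TSV with no quote or escape characters), a minimal reading sketch follows; the filename is hypothetical and a locally saved copy is assumed, since the dataset is defunct and the original S3 bucket is no longer accessible:

```
import csv

import pandas as pd

# Minimal sketch, assuming a locally saved TSV shard (hypothetical filename);
# the original amazon-reviews-pds S3 bucket is no longer accessible.
df = pd.read_csv(
    "amazon_reviews_us_Apparel_v1_00.tsv",
    sep="\t",                # tab delimited
    quoting=csv.QUOTE_NONE,  # the files use no quote characters
    on_bad_lines="skip",     # tolerate occasional malformed rows
)
print(df.columns.tolist())   # should list the fifteen columns documented above
```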
e969d0132f4dd28c2939d55be34f1788c00ccfe7
# Dataset Card for AmbigQA: Answering Ambiguous Open-domain Questions

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- [**Homepage:**](https://nlp.cs.washington.edu/ambigqa/)
- [**Repository:**](https://github.com/shmsw25/AmbigQA)
- [**Paper:**](https://arxiv.org/pdf/2004.10645.pdf)

### Dataset Summary

AmbigNQ is a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle, and many of them are only apparent after examining evidence provided by a very large text corpus. AmbigNQ provides 14,042 annotations on NQ-open questions, covering these diverse types of ambiguity.

We provide two distributions of our new dataset AmbigNQ: a `full` version with all annotation metadata and a `light` version with only inputs and outputs.

### Supported Tasks and Leaderboards

`question-answering`

### Languages

English

## Dataset Structure

### Data Instances

An example from the dataset looks as follows:

```
{'annotations': {'answer': [[]], 'qaPairs': [{'answer': [['April 19, 1987'], ['December 17, 1989']], 'question': ['When did the Simpsons first air on television as an animated short on the Tracey Ullman Show?', 'When did the Simpsons first air as a half-hour prime time show?']}], 'type': ['multipleQAs']}, 'id': '-4469503464110108318', 'nq_answer': ['December 17 , 1989'], 'nq_doc_title': 'The Simpsons', 'question': 'When did the simpsons first air on television?', 'used_queries': {'query': ['When did the simpsons first air on television?'], 'results': [{'snippet': ['The <b>Simpsons</b> is an American animated <b>television</b> sitcom starring the animated \nSimpson family, ... Since its <b>debut</b> on December 17, 1989, the show <b>has</b> \nbroadcast 673 episodes and its 30th season started ... The <b>Simpsons first</b> season \n<b>was</b> the Fox network&#39;s <b>first TV</b> series to rank among a season&#39;s top 30 highest-\nrated shows.', 'The <b>Simpsons</b> is an American animated sitcom created by Matt Groening for the \nFox ... Since its <b>debut</b> on December 17, 1989, 674 episodes of The <b>Simpsons</b> \nhave been broadcast. ... When producer James L. Brooks <b>was</b> working on the \n<b>television</b> variety show The Tracey Ullman Show, he decided to include small \nanimated&nbsp;...', '... in shorts from The Tracey Ullman Show as their <b>television debut</b> in 1987.
The \n<b>Simpsons</b> shorts are a series of animated shorts that <b>aired</b> as a recurring \nsegment on Fox variety <b>television</b> series The Tracey ... The final short to <b>air was</b> &quot;\n<b>TV Simpsons</b>&quot;, originally airing on May 14, 1989. The <b>Simpsons</b> later debuted on\n&nbsp;...', 'The <b>first</b> season of the American animated <b>television</b> series The <b>Simpsons</b> \noriginally <b>aired</b> on the Fox network between December 17, 1989, and May 13, \n1990, beginning with the Christmas special &quot;<b>Simpsons</b> Roasting on an Open Fire\n&quot;. The executive producers for the <b>first</b> production season <b>were</b> Matt Groening,&nbsp;...', 'The <b>Simpsons</b> is an American animated <b>television</b> sitcom created by Matt \nGroening for the Fox ... Since its <b>debut</b> on December 17, 1989, The <b>Simpsons</b> \n<b>has</b> broadcast 674 episodes. The show holds several American <b>television</b> \nlongevity&nbsp;...', 'The opening sequence of the American animated <b>television</b> series The <b>Simpsons</b> \nis among the most popular opening sequences in <b>television</b> and is accompanied \nby one of <b>television&#39;s</b> most recognizable theme songs. The <b>first</b> episode to use \nthis intro <b>was</b> the series&#39; second episode &quot;Bart the ... <b>was</b> the <b>first</b> episode of The \n<b>Simpsons</b> to <b>air</b> in 720p high-definition <b>television</b>,&nbsp;...', '&quot;<b>Simpsons</b> Roasting on an Open Fire&quot;, titled onscreen as &quot;The <b>Simpsons</b> \nChristmas Special&quot;, is the premiere episode of the American animated <b>TV</b> series \nThe <b>Simpsons</b>, ... The show <b>was</b> originally intended to <b>debut</b> earlier in 1989 with &quot;\nSome Enchanted Evening&quot;, but due to animation problems with that episode, the \nshow&nbsp;...', '&quot;Stark Raving Dad&quot; is the <b>first</b> episode of the third season of the American \nanimated <b>television</b> series The <b>Simpsons</b>. It <b>first aired</b> on the Fox network in the \nUnited States on September 19, 1991. ... The <b>Simpsons was</b> the second highest \nrated show on Fox the week it <b>aired</b>, behind Married... with Children. &quot;Stark \nRaving Dad,&quot;&nbsp;...', 'The <b>Simpsons</b>&#39; twentieth season <b>aired</b> on Fox from September 28, 2008 to May \n17, 2009. With this season, the show tied Gunsmoke as the longest-running \nAmerican primetime <b>television</b> series in terms of total number ... It <b>was</b> the <b>first</b>-\never episode of the show to <b>air</b> in Europe before being seen in the United States.', 'The animated <b>TV</b> show The <b>Simpsons</b> is an American English language \nanimated sitcom which ... The <b>Simpsons was</b> dubbed for the <b>first</b> time in Punjabi \nand <b>aired</b> on Geo <b>TV</b> in Pakistan. 
The name of the localised Punjabi version is \nTedi Sim&nbsp;...'], 'title': ['History of The Simpsons', 'The Simpsons', 'The Simpsons shorts', 'The Simpsons (season 1)', 'List of The Simpsons episodes', 'The Simpsons opening sequence', 'Simpsons Roasting on an Open Fire', 'Stark Raving Dad', 'The Simpsons (season 20)', 'Non-English versions of The Simpsons']}]}, 'viewed_doc_titles': ['The Simpsons']}
```

### Data Fields

Full

```
{'id': Value(dtype='string', id=None),
 'question': Value(dtype='string', id=None),
 'annotations': Sequence(feature={'type': Value(dtype='string', id=None), 'answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'qaPairs': Sequence(feature={'question': Value(dtype='string', id=None), 'answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, length=-1, id=None)}, length=-1, id=None),
 'viewed_doc_titles': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
 'used_queries': Sequence(feature={'query': Value(dtype='string', id=None), 'results': Sequence(feature={'title': Value(dtype='string', id=None), 'snippet': Value(dtype='string', id=None)}, length=-1, id=None)}, length=-1, id=None),
 'nq_answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
 'nq_doc_title': Value(dtype='string', id=None)}
```

In the original data format, `annotations` have different keys depending on the `type` field (`singleAnswer` or `multipleQAs`). This implementation instead uses an empty list `[]` for the unavailable keys; please refer to [Dataset Contents](https://github.com/shmsw25/AmbigQA#dataset-contents) for more details.

```
for example in train_light_dataset:
    for i, t in enumerate(example['annotations']['type']):
        if t == 'singleAnswer':
            # use the example['annotations']['answer'][i]
            # example['annotations']['qaPairs'][i] -> is []
            print(example['annotations']['answer'][i])
        else:
            # use the example['annotations']['qaPairs'][i]
            # example['annotations']['answer'][i] -> is []
            print(example['annotations']['qaPairs'][i])
```

The light version only has the `id`, `question`, and `annotations` fields.

### Data Splits

- train: 10036
- validation: 2002

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

- Wikipedia
- NQ-open:

```
@article{ kwiatkowski2019natural,
  title={ Natural questions: a benchmark for question answering research},
  author={ Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and others },
  journal={ Transactions of the Association for Computational Linguistics },
  year={ 2019 }
}
```

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [CC BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) ### Citation Information ``` @inproceedings{ min2020ambigqa, title={ {A}mbig{QA}: Answering Ambiguous Open-domain Questions }, author={ Min, Sewon and Michael, Julian and Hajishirzi, Hannaneh and Zettlemoyer, Luke }, booktitle={ EMNLP }, year={2020} } ``` ### Contributions Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
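For reference, a minimal loading sketch for the two configurations described in the Data Fields section, assuming the Hugging Face `datasets` library and the `ambig_qa` dataset id:

```python
from datasets import load_dataset

# "light" keeps only inputs and outputs; use "full" for all annotation metadata
ambig_qa = load_dataset("ambig_qa", "light")

example = ambig_qa["train"][0]
print(example["question"])
# each annotation is either a 'singleAnswer' or a 'multipleQAs' entry (see Data Fields)
print(example["annotations"]["type"])
```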
ambig_qa
[ "task_categories:question-answering", "task_ids:open-domain-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|natural_questions", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "arxiv:2004.10645", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|natural_questions", "original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "paperswithcode_id": "ambigqa", "pretty_name": "AmbigQA: Answering Ambiguous Open-domain Questions", "dataset_info": [{"config_name": "full", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "annotations", "sequence": [{"name": "type", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "qaPairs", "sequence": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}]}]}, {"name": "viewed_doc_titles", "sequence": "string"}, {"name": "used_queries", "sequence": [{"name": "query", "dtype": "string"}, {"name": "results", "sequence": [{"name": "title", "dtype": "string"}, {"name": "snippet", "dtype": "string"}]}]}, {"name": "nq_answer", "sequence": "string"}, {"name": "nq_doc_title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 43538533, "num_examples": 10036}, {"name": "validation", "num_bytes": 15383268, "num_examples": 2002}], "download_size": 30674462, "dataset_size": 58921801}, {"config_name": "light", "features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "annotations", "sequence": [{"name": "type", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "qaPairs", "sequence": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 2739628, "num_examples": 10036}, {"name": "validation", "num_bytes": 805756, "num_examples": 2002}], "download_size": 1777867, "dataset_size": 3545384}], "configs": [{"config_name": "full", "data_files": [{"split": "train", "path": "full/train-*"}, {"split": "validation", "path": "full/validation-*"}], "default": true}, {"config_name": "light", "data_files": [{"split": "train", "path": "light/train-*"}, {"split": "validation", "path": "light/validation-*"}]}]}
2024-01-09T12:27:07+00:00
[ "2004.10645" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|natural_questions #source_datasets-original #language-English #license-cc-by-sa-3.0 #arxiv-2004.10645 #region-us
# Dataset Card for AmbigQA: Answering Ambiguous Open-domain Questions ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: ### Dataset Summary AmbigNQ, a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle, many of which are only apparent after examining evidence provided by a very large text corpus. AMBIGNQ, a dataset with 14,042 annotations on NQ-OPEN questions containing diverse types of ambiguity. We provide two distributions of our new dataset AmbigNQ: a 'full' version with all annotation metadata and a 'light' version with only inputs and outputs. ### Supported Tasks and Leaderboards 'question-answering' ### Languages English ## Dataset Structure ### Data Instances An example from the data set looks as follows: ### Data Fields Full In the original data format 'annotations' have different keys depending on the 'type' field = 'singleAnswer' or 'multipleQAs'. But this implementation uses an empty list '[]' for the unavailable keys please refer to Dataset Contents(URL for more details. please refer to Dataset Contents(URL for more details. Light version only has 'id', 'question', 'annotations' fields ### Data Splits - train: 10036 - validation: 2002 ## Dataset Creation ### Curation Rationale ### Source Data - Wikipedia - NQ-open: #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information CC BY-SA 3.0 ### Contributions Thanks to @cceyda for adding this dataset.
[ "# Dataset Card for AmbigQA: Answering Ambiguous Open-domain Questions", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:", "### Dataset Summary\n\nAmbigNQ, a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle, many of which are only apparent after examining evidence provided by a very large text corpus. AMBIGNQ, a dataset with\n14,042 annotations on NQ-OPEN questions containing diverse types of ambiguity.\nWe provide two distributions of our new dataset AmbigNQ: a 'full' version with all annotation metadata and a 'light' version with only inputs and outputs.", "### Supported Tasks and Leaderboards\n\n'question-answering'", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nAn example from the data set looks as follows:", "### Data Fields\n\nFull\n\nIn the original data format 'annotations' have different keys depending on the 'type' field = 'singleAnswer' or 'multipleQAs'. But this implementation uses an empty list '[]' for the unavailable keys \n\nplease refer to Dataset Contents(URL for more details.\n\n\n\nplease refer to Dataset Contents(URL for more details.\n\nLight version only has 'id', 'question', 'annotations' fields", "### Data Splits\n\n- train: 10036\n- validation: 2002", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\n- Wikipedia\n- NQ-open:", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCC BY-SA 3.0", "### Contributions\n\nThanks to @cceyda for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|natural_questions #source_datasets-original #language-English #license-cc-by-sa-3.0 #arxiv-2004.10645 #region-us \n", "# Dataset Card for AmbigQA: Answering Ambiguous Open-domain Questions", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:", "### Dataset Summary\n\nAmbigNQ, a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle, many of which are only apparent after examining evidence provided by a very large text corpus. AMBIGNQ, a dataset with\n14,042 annotations on NQ-OPEN questions containing diverse types of ambiguity.\nWe provide two distributions of our new dataset AmbigNQ: a 'full' version with all annotation metadata and a 'light' version with only inputs and outputs.", "### Supported Tasks and Leaderboards\n\n'question-answering'", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nAn example from the data set looks as follows:", "### Data Fields\n\nFull\n\nIn the original data format 'annotations' have different keys depending on the 'type' field = 'singleAnswer' or 'multipleQAs'. But this implementation uses an empty list '[]' for the unavailable keys \n\nplease refer to Dataset Contents(URL for more details.\n\n\n\nplease refer to Dataset Contents(URL for more details.\n\nLight version only has 'id', 'question', 'annotations' fields", "### Data Splits\n\n- train: 10036\n- validation: 2002", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\n- Wikipedia\n- NQ-open:", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCC BY-SA 3.0", "### Contributions\n\nThanks to @cceyda for adding this dataset." ]
[ 118, 20, 120, 15, 157, 17, 5, 6, 17, 110, 15, 5, 7, 12, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 11, 16 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|natural_questions #source_datasets-original #language-English #license-cc-by-sa-3.0 #arxiv-2004.10645 #region-us \n# Dataset Card for AmbigQA: Answering Ambiguous Open-domain Questions## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:### Dataset Summary\n\nAmbigNQ, a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle, many of which are only apparent after examining evidence provided by a very large text corpus. AMBIGNQ, a dataset with\n14,042 annotations on NQ-OPEN questions containing diverse types of ambiguity.\nWe provide two distributions of our new dataset AmbigNQ: a 'full' version with all annotation metadata and a 'light' version with only inputs and outputs.### Supported Tasks and Leaderboards\n\n'question-answering'### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nAn example from the data set looks as follows:" ]
1f3f4fa57acb59b2f352031de45ba08227d972c0
# Dataset Card for AmericasNLI

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/abteen/americasnli
- **Repository:** https://github.com/nala-cub/AmericasNLI
- **Paper:** https://arxiv.org/abs/2104.08726
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

AmericasNLI is an extension of XNLI (Conneau et al., 2018), a natural language inference (NLI) dataset covering 15 high-resource languages, to 10 low-resource indigenous languages spoken in the Americas: Ashaninka, Aymara, Bribri, Guarani, Nahuatl, Otomi, Quechua, Raramuri, Shipibo-Konibo, and Wixarika. As with MNLI, the goal is to predict textual entailment (does sentence A imply, contradict, or neither imply nor contradict sentence B), framed as a classification task: given two sentences, predict one of three labels.
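As a quick illustration of the task format, a minimal loading sketch (assuming the Hugging Face `datasets` library; configuration and split names are listed under Data Splits below):

```python
from datasets import load_dataset

# one configuration per target language (e.g. "aym" for Aymara); "all_languages" combines them
americas_nli = load_dataset("nala-cub/americas_nli", "aym")

example = americas_nli["test"][0]  # only validation and test splits are provided
print(example["premise"])
print(example["hypothesis"])
print(example["label"])  # 0 = entailment, 1 = neutral, 2 = contradiction
```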
### Supported Tasks and Leaderboards

[Needs More Information]

### Languages

- aym
- bzd
- cni
- gn
- hch
- nah
- oto
- quy
- shp
- tar

## Dataset Structure

### Data Instances

#### all_languages

An example of the test split looks as follows:

```
{'language': 'aym', 'premise': "Ukhamaxa, janiw ukatuqits lup'kayätti, ukhamarus wali phiñasitayätwa, ukatx jupampiw mayamp aruskipañ qallanttha.", 'hypothesis': 'Janiw mayamp jupampix parlxapxti.', 'label': 2}
```

#### aym

An example of the test split looks as follows:

```
{'premise': "Ukhamaxa, janiw ukatuqits lup'kayätti, ukhamarus wali phiñasitayätwa, ukatx jupampiw mayamp aruskipañ qallanttha.", 'hypothesis': 'Janiw mayamp jupampix parlxapxti.', 'label': 2}
```

#### bzd

An example of the test split looks as follows:

```
{'premise': "Bua', kèq ye' kũ e' bikeitsök erë ye' chkénãwã tã ye' ujtémĩne ie' tã páxlĩnẽ.", 'hypothesis': "Kèq ye' ùtẽnẽ ie' tã páxlĩ.", 'label': 2}
```

#### cni

An example of the test split looks as follows:

```
{'premise': 'Kameetsa, tee nokenkeshireajeroji, iro kantaincha tee nomateroji aisati nintajaro noñanatajiri iroakera.', 'hypothesis': 'Tee noñatajeriji.', 'label': 2}
```

#### gn

An example of the test split looks as follows:

```
{'premise': "Néi, ni napensaikurihína upéva rehe, ajepichaiterei ha añepyrûjey añe'ê hendive.", 'hypothesis': "Nañe'êvéi hendive.", 'label': 2}
```

#### hch

An example of the test split looks as follows:

```
{'premise': 'mu hekwa.', 'hypothesis': 'neuka tita xatawe m+k+ mat+a.', 'label': 2}
```

#### nah

An example of the test split looks as follows:

```
{'premise': 'Cualtitoc, na axnimoihliaya ino, nicualaniztoya queh naha nicamohuihqui', 'hypothesis': 'Ayoc nicamohuihtoc', 'label': 2}
```

#### oto

An example of the test split looks as follows:

```
{'premise': 'mi-ga, nin mibⴘy mbô̮nitho ane guenu, guedi mibⴘy nho ⴘnmⴘy xi di mⴘdi o ñana nen nⴘua manaigui', 'hypothesis': 'hin din bi pengui nen nⴘa', 'label': 2}
```

#### quy

An example of the test split looks as follows:

```
{'premise': 'Allinmi, manam chaypiqa hamutachkarqanichu, ichaqa manam allinchu tarikurqani chaymi kaqllamanta paywan rimarqani.', 'hypothesis': 'Manam paywanqa kaqllamantaqa rimarqani .', 'label': 2}
```

#### shp

An example of the test split looks as follows:

```
{'premise': 'Jakon riki, ja shinanamara ea ike, ikaxbi kikin frustradara ea ike jakopira ea jabe yoyo iribake.', 'hypothesis': 'Eara jabe yoyo iribiama iki.', 'label': 2}
```

#### tar

An example of the test split looks as follows:

```
{'premise': 'Ga’lá ju, ke tási newalayé nejé echi kítira, we ne majáli, a’lí ko uchécho ne yua ku ra’íchaki.', 'hypothesis': 'Tási ne uchecho yua ra’ícha échi rejói.', 'label': 2}
```

### Data Fields

#### all_languages

- language: a string variable indicating the example's target language, with possible values aym, bzd, cni, gn, hch, nah, oto, quy, shp, tar.
- premise: a multilingual string variable, written in the indicated target language.
- hypothesis: a multilingual string variable, written in the indicated target language.
- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).

#### aym

- premise: a string feature.
- hypothesis: a string feature.
- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).

#### bzd

- premise: a string feature.
- hypothesis: a string feature.
- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).

#### cni

- premise: a string feature.
- hypothesis: a string feature.
- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).

#### gn

- premise: a string feature.
- hypothesis: a string feature.
- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).

#### hch

- premise: a string feature.
- hypothesis: a string feature.
- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).

#### nah

- premise: a string feature.
- hypothesis: a string feature.
- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).

#### oto

- premise: a string feature.
- hypothesis: a string feature.
- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).

#### quy

- premise: a string feature.
- hypothesis: a string feature.
- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).

#### shp

- premise: a string feature.
- hypothesis: a string feature.
- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).

#### tar

- premise: a string feature.
- hypothesis: a string feature.
- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).

### Data Splits

| Language          | ISO | Family       |  Dev | Test |
|-------------------|-----|:-------------|-----:|-----:|
| all_languages     | --  | --           | 6457 | 7486 |
| Aymara            | aym | Aymaran      |  743 |  750 |
| Ashaninka         | cni | Arawak       |  658 |  750 |
| Bribri            | bzd | Chibchan     |  743 |  750 |
| Guarani           | gn  | Tupi-Guarani |  743 |  750 |
| Nahuatl           | nah | Uto-Aztecan  |  376 |  738 |
| Otomi             | oto | Oto-Manguean |  222 |  748 |
| Quechua           | quy | Quechuan     |  743 |  750 |
| Raramuri          | tar | Uto-Aztecan  |  743 |  750 |
| Shipibo-Konibo    | shp | Panoan       |  743 |  750 |
| Wixarika          | hch | Uto-Aztecan  |  743 |  750 |

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

The authors translate from the Spanish subset of XNLI.

> AmericasNLI is the translation of a subset of XNLI (Conneau et al., 2018). As translators between Spanish and the target languages are more frequently available than those for English, we translate from the Spanish version.

As per paragraph 3.1 of the [original paper](https://arxiv.org/abs/2104.08726).

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

The dataset comprises expert translations from Spanish XNLI.

> Additionally, some translators reported that code-switching is often used to describe certain topics, and, while many words without an exact equivalence in the target language are worked in through translation or interpretation, others are kept in Spanish. To minimize the amount of Spanish vocabulary in the translated examples, we choose sentences from genres that we judged to be relatively easy to translate into the target languages: “face-to-face,” “letters,” and “telephone.”

As per paragraph 3.1 of the [original paper](https://arxiv.org/abs/2104.08726).

#### Who are the annotators?
[Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Creative Commons Attribution Share Alike 4.0 International: https://github.com/abteen/americasnli/blob/main/LICENSE.md ### Citation Information ``` @inproceedings{ebrahimi-etal-2022-americasnli, title = "{A}mericas{NLI}: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages", author = "Ebrahimi, Abteen and Mager, Manuel and Oncevay, Arturo and Chaudhary, Vishrav and Chiruzzo, Luis and Fan, Angela and Ortega, John and Ramos, Ricardo and Rios, Annette and Meza Ruiz, Ivan Vladimir and Gim{\'e}nez-Lugo, Gustavo and Mager, Elisabeth and Neubig, Graham and Palmer, Alexis and Coto-Solano, Rolando and Vu, Thang and Kann, Katharina", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.435", pages = "6279--6299", abstract = "Pretrained multilingual models are able to perform cross-lingual transfer in a zero-shot setting, even for languages unseen during pretraining. However, prior work evaluating performance on unseen languages has largely been limited to low-level, syntactic tasks, and it remains unclear if zero-shot learning of high-level, semantic tasks is possible for unseen languages. To explore this question, we present AmericasNLI, an extension of XNLI (Conneau et al., 2018) to 10 Indigenous languages of the Americas. We conduct experiments with XLM-R, testing multiple zero-shot and translation-based approaches. Additionally, we explore model adaptation via continued pretraining and provide an analysis of the dataset by considering hypothesis-only models. We find that XLM-R{'}s zero-shot performance is poor for all 10 languages, with an average performance of 38.48{\%}. Continued pretraining offers improvements, with an average accuracy of 43.85{\%}. Surprisingly, training on poorly translated data by far outperforms all other methods with an accuracy of 49.12{\%}.", } ``` ### Contributions Thanks to [@fdschmidt93](https://github.com/fdschmidt93) for adding this dataset.
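Since `label` is stored as an integer-backed `ClassLabel` (see Data Fields above), a small sketch for converting between ids and names with the `datasets` feature API:

```python
from datasets import load_dataset

americas_nli = load_dataset("nala-cub/americas_nli", "quy")
label_feature = americas_nli["test"].features["label"]

# ClassLabel exposes int2str/str2int for id <-> name conversion
print(label_feature.names)               # ['entailment', 'neutral', 'contradiction']
print(label_feature.int2str(2))          # 'contradiction'
print(label_feature.str2int("neutral"))  # 1
```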
nala-cub/americas_nli
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:multilingual", "multilinguality:translation", "size_categories:unknown", "source_datasets:extended|xnli", "language:ay", "language:bzd", "language:cni", "language:gn", "language:hch", "language:nah", "language:oto", "language:qu", "language:shp", "language:tar", "license:cc-by-sa-4.0", "arxiv:2104.08726", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ay", "bzd", "cni", "gn", "hch", "nah", "oto", "qu", "shp", "tar"], "license": "cc-by-sa-4.0", "multilinguality": ["multilingual", "translation"], "size_categories": ["unknown"], "source_datasets": ["extended|xnli"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "pretty_name": "AmericasNLI: A NLI Corpus of 10 Indigenous Low-Resource Languages.", "dataset_info": [{"config_name": "all_languages", "features": [{"name": "language", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "validation", "num_bytes": 1129080, "num_examples": 6457}, {"name": "test", "num_bytes": 1210579, "num_examples": 7486}], "download_size": 791239, "dataset_size": 2339659}, {"config_name": "aym", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "validation", "num_bytes": 117530, "num_examples": 743}, {"name": "test", "num_bytes": 115251, "num_examples": 750}], "download_size": 87882, "dataset_size": 232781}, {"config_name": "bzd", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "validation", "num_bytes": 143354, "num_examples": 743}, {"name": "test", "num_bytes": 127676, "num_examples": 750}], "download_size": 91039, "dataset_size": 271030}, {"config_name": "cni", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "validation", "num_bytes": 113256, "num_examples": 658}, {"name": "test", "num_bytes": 116284, "num_examples": 750}], "download_size": 78899, "dataset_size": 229540}, {"config_name": "gn", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "validation", "num_bytes": 115135, "num_examples": 743}, {"name": "test", "num_bytes": 101948, "num_examples": 750}], "download_size": 80429, "dataset_size": 217083}, {"config_name": "hch", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "validation", "num_bytes": 127966, "num_examples": 743}, {"name": "test", "num_bytes": 120857, "num_examples": 750}], "download_size": 90748, "dataset_size": 248823}, {"config_name": "nah", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "validation", "num_bytes": 50741, "num_examples": 376}, {"name": "test", "num_bytes": 102953, "num_examples": 738}], "download_size": 56953, "dataset_size": 153694}, {"config_name": "oto", "features": 
[{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "validation", "num_bytes": 27010, "num_examples": 222}, {"name": "test", "num_bytes": 119650, "num_examples": 748}], "download_size": 57849, "dataset_size": 146660}, {"config_name": "quy", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "validation", "num_bytes": 125636, "num_examples": 743}, {"name": "test", "num_bytes": 112750, "num_examples": 750}], "download_size": 85673, "dataset_size": 238386}, {"config_name": "shp", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "validation", "num_bytes": 124500, "num_examples": 743}, {"name": "test", "num_bytes": 118934, "num_examples": 750}], "download_size": 85544, "dataset_size": 243434}, {"config_name": "tar", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}], "splits": [{"name": "validation", "num_bytes": 139496, "num_examples": 743}, {"name": "test", "num_bytes": 122624, "num_examples": 750}], "download_size": 89683, "dataset_size": 262120}], "configs": [{"config_name": "all_languages", "data_files": [{"split": "validation", "path": "all_languages/validation-*"}, {"split": "test", "path": "all_languages/test-*"}]}, {"config_name": "aym", "data_files": [{"split": "validation", "path": "aym/validation-*"}, {"split": "test", "path": "aym/test-*"}]}, {"config_name": "bzd", "data_files": [{"split": "validation", "path": "bzd/validation-*"}, {"split": "test", "path": "bzd/test-*"}]}, {"config_name": "cni", "data_files": [{"split": "validation", "path": "cni/validation-*"}, {"split": "test", "path": "cni/test-*"}]}, {"config_name": "gn", "data_files": [{"split": "validation", "path": "gn/validation-*"}, {"split": "test", "path": "gn/test-*"}]}, {"config_name": "hch", "data_files": [{"split": "validation", "path": "hch/validation-*"}, {"split": "test", "path": "hch/test-*"}]}, {"config_name": "nah", "data_files": [{"split": "validation", "path": "nah/validation-*"}, {"split": "test", "path": "nah/test-*"}]}, {"config_name": "oto", "data_files": [{"split": "validation", "path": "oto/validation-*"}, {"split": "test", "path": "oto/test-*"}]}, {"config_name": "quy", "data_files": [{"split": "validation", "path": "quy/validation-*"}, {"split": "test", "path": "quy/test-*"}]}, {"config_name": "shp", "data_files": [{"split": "validation", "path": "shp/validation-*"}, {"split": "test", "path": "shp/test-*"}]}, {"config_name": "tar", "data_files": [{"split": "validation", "path": "tar/validation-*"}, {"split": "test", "path": "tar/test-*"}]}]}
2024-01-23T09:18:27+00:00
[ "2104.08726" ]
[ "ay", "bzd", "cni", "gn", "hch", "nah", "oto", "qu", "shp", "tar" ]
TAGS #task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-unknown #source_datasets-extended|xnli #language-Aymara #language-Bribri #language-Asháninka #language-Guarani #language-Huichol #language-nah #language-oto #language-Quechua #language-Shipibo-Conibo #language-Central Tarahumara #license-cc-by-sa-4.0 #arxiv-2104.08726 #region-us
Dataset Card for AmericasNLI ============================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information Dataset Description ------------------- * Homepage: * Repository: URL * Repository: URL * Paper: URL * Leaderboard: * Point of Contact: ### Dataset Summary AmericasNLI is an extension of XNLI (Conneau et al., 2018) a natural language inference (NLI) dataset covering 15 high-resource languages to 10 low-resource indigenous languages spoken in the Americas: Ashaninka, Aymara, Bribri, Guarani, Nahuatl, Otomi, Quechua, Raramuri, Shipibo-Konibo, and Wixarika. As with MNLI, the goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B) and is a classification task (given two sentences, predict one of three labels). ### Supported Tasks and Leaderboards ### Languages * aym * bzd * cni * gn * hch * nah * oto * quy * shp * tar Dataset Structure ----------------- ### Data Instances #### all\_languages An example of the test split looks as follows: #### aym An example of the test split looks as follows: #### bzd An example of the test split looks as follows: #### cni An example of the test split looks as follows: #### gn An example of the test split looks as follows: #### hch An example of the test split looks as follows: #### nah An example of the test split looks as follows: #### oto An example of the test split looks as follows: #### quy An example of the test split looks as follows: #### shp An example of the test split looks as follows: #### tar An example of the test split looks as follows: ### Data Fields #### all\_languages ``` - language: a multilingual string variable, with languages including ar, bg, de, el, en. - premise: a multilingual string variable, with languages including ar, bg, de, el, en. - hypothesis: a multilingual string variable, with possible languages including ar, bg, de, el, en. - label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2). ``` #### aym ``` - premise: a string feature. - hypothesis: a string feature. - label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2). ``` #### bzd ``` - premise: a string feature. - hypothesis: a string feature. - label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2). ``` #### cni ``` - premise: a string feature. - hypothesis: a string feature. - label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2). ``` #### hch ``` - premise: a string feature. - hypothesis: a string feature. - label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2). ``` #### nah ``` - premise: a string feature. - hypothesis: a string feature. - label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2). ``` #### oto ``` - premise: a string feature. - hypothesis: a string feature. - label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2). 
``` #### quy ``` - premise: a string feature. - hypothesis: a string feature. - label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2). ``` #### shp ``` - premise: a string feature. - hypothesis: a string feature. - label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2). ``` #### tar ``` - premise: a string feature. - hypothesis: a string feature. - label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2). ``` ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data The authors translate from the Spanish subset of XNLI. > > AmericasNLI is the translation of a subset of XNLI (Conneau et al., 2018). As translators between Spanish and the target languages are more frequently available than those for English, we translate from the Spanish version. > > > As per paragraph 3.1 of the original paper. #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process The dataset comprises expert translations from Spanish XNLI. > > Additionally, some translators reported that code-switching is often used to describe certain topics, and, while many words without an exact equivalence in the target language are worked in through translation or interpretation, others are kept in Spanish. To minimize the amount of Spanish vocabulary in the translated examples, we choose sentences from genres that we judged to be relatively easy to translate into the target languages: “face-to-face,” “letters,” and “telephone.” > > > As per paragraph 3.1 of the original paper. #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Creative Commons Attribution Share Alike 4.0 International: URL ### Contributions Thanks to @fdschmidt93 for adding this dataset.
[ "### Dataset Summary\n\n\nAmericasNLI is an extension of XNLI (Conneau et al., 2018) a natural language inference (NLI) dataset covering 15 high-resource languages to 10 low-resource indigenous languages spoken in the Americas: Ashaninka, Aymara, Bribri, Guarani, Nahuatl, Otomi, Quechua, Raramuri, Shipibo-Konibo, and Wixarika. As with MNLI, the goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B) and is a classification task (given two sentences, predict one of three labels).", "### Supported Tasks and Leaderboards", "### Languages\n\n\n* aym\n* bzd\n* cni\n* gn\n* hch\n* nah\n* oto\n* quy\n* shp\n* tar\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### all\\_languages\n\n\nAn example of the test split looks as follows:", "#### aym\n\n\nAn example of the test split looks as follows:", "#### bzd\n\n\nAn example of the test split looks as follows:", "#### cni\n\n\nAn example of the test split looks as follows:", "#### gn\n\n\nAn example of the test split looks as follows:", "#### hch\n\n\nAn example of the test split looks as follows:", "#### nah\n\n\nAn example of the test split looks as follows:", "#### oto\n\n\nAn example of the test split looks as follows:", "#### quy\n\n\nAn example of the test split looks as follows:", "#### shp\n\n\nAn example of the test split looks as follows:", "#### tar\n\n\nAn example of the test split looks as follows:", "### Data Fields", "#### all\\_languages\n\n\n\n```\n- language: a multilingual string variable, with languages including ar, bg, de, el, en.\n- premise: a multilingual string variable, with languages including ar, bg, de, el, en.\n- hypothesis: a multilingual string variable, with possible languages including ar, bg, de, el, en.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### aym\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### bzd\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### cni\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### hch\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### nah\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### oto\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### quy\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### shp\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### 
tar\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nThe authors translate from the Spanish subset of XNLI.\n\n\n\n> \n> AmericasNLI is the translation of a subset of XNLI (Conneau et al., 2018). As translators between Spanish and the target languages are more frequently available than those for English, we translate from the Spanish version.\n> \n> \n> \n\n\nAs per paragraph 3.1 of the original paper.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nThe dataset comprises expert translations from Spanish XNLI.\n\n\n\n> \n> Additionally, some translators reported that code-switching is often used to describe certain topics, and, while many words without an exact equivalence in the target language are worked in through translation or interpretation, others are kept in Spanish. To minimize the amount of Spanish vocabulary in the translated examples, we choose sentences from genres that we judged to be relatively easy to translate into the target languages: “face-to-face,” “letters,” and “telephone.”\n> \n> \n> \n\n\nAs per paragraph 3.1 of the original paper.", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCreative Commons Attribution Share Alike 4.0 International: URL", "### Contributions\n\n\nThanks to @fdschmidt93 for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-unknown #source_datasets-extended|xnli #language-Aymara #language-Bribri #language-Asháninka #language-Guarani #language-Huichol #language-nah #language-oto #language-Quechua #language-Shipibo-Conibo #language-Central Tarahumara #license-cc-by-sa-4.0 #arxiv-2104.08726 #region-us \n", "### Dataset Summary\n\n\nAmericasNLI is an extension of XNLI (Conneau et al., 2018) a natural language inference (NLI) dataset covering 15 high-resource languages to 10 low-resource indigenous languages spoken in the Americas: Ashaninka, Aymara, Bribri, Guarani, Nahuatl, Otomi, Quechua, Raramuri, Shipibo-Konibo, and Wixarika. As with MNLI, the goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B) and is a classification task (given two sentences, predict one of three labels).", "### Supported Tasks and Leaderboards", "### Languages\n\n\n* aym\n* bzd\n* cni\n* gn\n* hch\n* nah\n* oto\n* quy\n* shp\n* tar\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### all\\_languages\n\n\nAn example of the test split looks as follows:", "#### aym\n\n\nAn example of the test split looks as follows:", "#### bzd\n\n\nAn example of the test split looks as follows:", "#### cni\n\n\nAn example of the test split looks as follows:", "#### gn\n\n\nAn example of the test split looks as follows:", "#### hch\n\n\nAn example of the test split looks as follows:", "#### nah\n\n\nAn example of the test split looks as follows:", "#### oto\n\n\nAn example of the test split looks as follows:", "#### quy\n\n\nAn example of the test split looks as follows:", "#### shp\n\n\nAn example of the test split looks as follows:", "#### tar\n\n\nAn example of the test split looks as follows:", "### Data Fields", "#### all\\_languages\n\n\n\n```\n- language: a multilingual string variable, with languages including ar, bg, de, el, en.\n- premise: a multilingual string variable, with languages including ar, bg, de, el, en.\n- hypothesis: a multilingual string variable, with possible languages including ar, bg, de, el, en.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### aym\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### bzd\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### cni\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### hch\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### nah\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### oto\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, 
with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### quy\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### shp\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "#### tar\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nThe authors translate from the Spanish subset of XNLI.\n\n\n\n> \n> AmericasNLI is the translation of a subset of XNLI (Conneau et al., 2018). As translators between Spanish and the target languages are more frequently available than those for English, we translate from the Spanish version.\n> \n> \n> \n\n\nAs per paragraph 3.1 of the original paper.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nThe dataset comprises expert translations from Spanish XNLI.\n\n\n\n> \n> Additionally, some translators reported that code-switching is often used to describe certain topics, and, while many words without an exact equivalence in the target language are worked in through translation or interpretation, others are kept in Spanish. To minimize the amount of Spanish vocabulary in the translated examples, we choose sentences from genres that we judged to be relatively easy to translate into the target languages: “face-to-face,” “letters,” and “telephone.”\n> \n> \n> \n\n\nAs per paragraph 3.1 of the original paper.", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCreative Commons Attribution Share Alike 4.0 International: URL", "### Contributions\n\n\nThanks to @fdschmidt93 for adding this dataset." ]
[ 172, 156, 10, 36, 6, 18, 15, 15, 15, 15, 15, 14, 14, 14, 14, 14, 5, 114, 49, 49, 49, 49, 48, 48, 48, 48, 48, 11, 7, 86, 10, 10, 5, 146, 9, 18, 7, 8, 14, 6, 16, 20 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-unknown #source_datasets-extended|xnli #language-Aymara #language-Bribri #language-Asháninka #language-Guarani #language-Huichol #language-nah #language-oto #language-Quechua #language-Shipibo-Conibo #language-Central Tarahumara #license-cc-by-sa-4.0 #arxiv-2104.08726 #region-us \n### Dataset Summary\n\n\nAmericasNLI is an extension of XNLI (Conneau et al., 2018) a natural language inference (NLI) dataset covering 15 high-resource languages to 10 low-resource indigenous languages spoken in the Americas: Ashaninka, Aymara, Bribri, Guarani, Nahuatl, Otomi, Quechua, Raramuri, Shipibo-Konibo, and Wixarika. As with MNLI, the goal is to predict textual entailment (does sentence A imply/contradict/neither sentence B) and is a classification task (given two sentences, predict one of three labels).### Supported Tasks and Leaderboards### Languages\n\n\n* aym\n* bzd\n* cni\n* gn\n* hch\n* nah\n* oto\n* quy\n* shp\n* tar\n\n\nDataset Structure\n-----------------### Data Instances#### all\\_languages\n\n\nAn example of the test split looks as follows:#### aym\n\n\nAn example of the test split looks as follows:#### bzd\n\n\nAn example of the test split looks as follows:#### cni\n\n\nAn example of the test split looks as follows:#### gn\n\n\nAn example of the test split looks as follows:#### hch\n\n\nAn example of the test split looks as follows:#### nah\n\n\nAn example of the test split looks as follows:#### oto\n\n\nAn example of the test split looks as follows:", "passage: #### quy\n\n\nAn example of the test split looks as follows:#### shp\n\n\nAn example of the test split looks as follows:#### tar\n\n\nAn example of the test split looks as follows:### Data Fields#### all\\_languages\n\n\n\n```\n- language: a multilingual string variable, with languages including ar, bg, de, el, en.\n- premise: a multilingual string variable, with languages including ar, bg, de, el, en.\n- hypothesis: a multilingual string variable, with possible languages including ar, bg, de, el, en.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```#### aym\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```#### bzd\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```#### cni\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```#### hch\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```#### nah\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```#### oto\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), 
contradiction (2).\n\n```#### quy\n\n\n\n```\n- premise: a string feature.\n- hypothesis: a string feature.\n- label: a classification label, with possible values including entailment (0), neutral (1), contradiction (2).\n\n```" ]
81c6507a5cead40db13e77610fdcdf5c0f6261e4
# Dataset Card for AMI Corpus

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Dataset Preprocessing](#dataset-preprocessing)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
<p><b>Deprecated:</b> This legacy dataset is outdated. Please, use <a href="https://huggingface.co/datasets/edinburghcstr/ami"> edinburghcstr/ami </a> instead.</p>
</div>

## Dataset Description

- **Homepage:** [AMI corpus](https://groups.inf.ed.ac.uk/ami/corpus/)
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals synchronized to a common timeline. These include close-talking and far-field microphones, individual and room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings, the participants also have unsynchronized pens available to them that record what is written. The meetings were recorded in English using three different rooms with different acoustic properties, and include mostly non-native speakers.

### Dataset Preprocessing

Individual samples of the AMI dataset contain very large audio files (between 10 and 60 minutes). Such lengths are infeasible for most speech recognition models. In the following, we show how the dataset can effectively be chunked into multiple segments as defined by the dataset creators.
The following function cuts the long audio files into the defined segment lengths:

```python
import math

import librosa
from datasets import load_dataset

SAMPLE_RATE = 16_000


def chunk_audio(batch):
    new_batch = {
        "audio": [],
        "words": [],
        "speaker": [],
        "lengths": [],
        "word_start_times": [],
        "segment_start_times": [],
    }

    audio, _ = librosa.load(batch["file"][0], sr=SAMPLE_RATE)

    word_idx = 0
    num_words = len(batch["words"][0])
    for segment_idx in range(len(batch["segment_start_times"][0])):
        words = []
        word_start_times = []
        start_time = batch["segment_start_times"][0][segment_idx]
        end_time = batch["segment_end_times"][0][segment_idx]

        # go back and forth with word_idx since segments overlap with each other;
        # word_idx > 0 keeps word_idx - 1 a valid index and lets the first word be re-included
        while (word_idx > 0) and (start_time < batch["word_end_times"][0][word_idx - 1]):
            word_idx -= 1
        while word_idx < num_words and (start_time > batch["word_start_times"][0][word_idx]):
            word_idx += 1

        # cut the raw waveform to the segment boundaries
        new_batch["audio"].append(audio[int(start_time * SAMPLE_RATE): int(end_time * SAMPLE_RATE)])

        # collect all words whose start time falls inside the segment
        while word_idx < num_words and batch["word_start_times"][0][word_idx] < end_time:
            words.append(batch["words"][0][word_idx])
            word_start_times.append(batch["word_start_times"][0][word_idx])
            word_idx += 1

        new_batch["lengths"].append(end_time - start_time)
        new_batch["words"].append(words)
        new_batch["speaker"].append(batch["segment_speakers"][0][segment_idx])
        new_batch["word_start_times"].append(word_start_times)

        new_batch["segment_start_times"].append(batch["segment_start_times"][0][segment_idx])

    return new_batch


ami = load_dataset("ami", "headset-single")
ami = ami.map(chunk_audio, batched=True, batch_size=1, remove_columns=ami["train"].column_names)
```

The segmented audio files can still be as long as a minute. To further chunk the data into shorter audio chunks, you can use the following script.

```python
MAX_LENGTH_IN_SECONDS = 20.0


def chunk_into_max_n_seconds(batch):
    new_batch = {
        "audio": [],
        "text": [],
    }

    sample_length = batch["lengths"][0]
    segment_start = batch["segment_start_times"][0]

    if sample_length > MAX_LENGTH_IN_SECONDS:
        num_chunks_per_sample = math.ceil(sample_length / MAX_LENGTH_IN_SECONDS)
        avg_chunk_length = sample_length / num_chunks_per_sample
        num_words = len(batch["words"][0])

        # start chunking by times: advance end_word_idx up to each chunk boundary
        start_word_idx = end_word_idx = 0
        chunk_start_time = 0
        for n in range(num_chunks_per_sample):
            while (end_word_idx < num_words - 1) and (batch["word_start_times"][0][end_word_idx] < segment_start + (n + 1) * avg_chunk_length):
                end_word_idx += 1
            chunk_end_time = int((batch["word_start_times"][0][end_word_idx] - segment_start) * SAMPLE_RATE)
            new_batch["audio"].append(batch["audio"][0][chunk_start_time: chunk_end_time])
            new_batch["text"].append(" ".join(batch["words"][0][start_word_idx: end_word_idx]))

            chunk_start_time = chunk_end_time
            start_word_idx = end_word_idx
    else:
        new_batch["audio"].append(batch["audio"][0])
        new_batch["text"].append(" ".join(batch["words"][0]))

    return new_batch


ami = ami.map(chunk_into_max_n_seconds, batched=True, batch_size=1, remove_columns=ami["train"].column_names, num_proc=64)
```

A segmented and chunked dataset of the config `"headset-single"` can be found [here](https://huggingface.co/datasets/ami-wav2vec2/ami_single_headset_segmented_and_chunked).

### Supported Tasks and Leaderboards

- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task does not have an active leaderboard at the moment.
- `speaker-diarization`: The dataset can be used to train a model for Speaker Diarization (SD). The model is presented with an audio file and asked to predict which speaker spoke at what time.

### Languages

The audio is in English.

## Dataset Structure

### Data Instances

A typical data point comprises the path to the audio file (or files, in the case of the multi-headset or multi-microphone configurations), called `file`, and its transcription as a list of words, called `words`. Additional information about the speakers and the word- and segment-level timestamps (`word_start_times`, `word_end_times`, `segment_start_times`, `segment_end_times`) is also provided.

```
{'word_ids': ["ES2004a.D.words1", "ES2004a.D.words2", ...],
 'word_start_times': [0.3700000047683716, 0.949999988079071, ...],
 'word_end_times': [0.949999988079071, 1.5299999713897705, ...],
 'word_speakers': ['A', 'A', ...],
 'segment_ids': ["ES2004a.sync.1", "ES2004a.sync.2", ...],
 'segment_start_times': [10.944000244140625, 17.618999481201172, ...],
 'segment_end_times': [17.618999481201172, 18.722000122070312, ...],
 'segment_speakers': ['A', 'B', ...],
 'words': ["hmm", "hmm", ...],
 'channels': [0, 0, ...],
 'file': "/.cache/huggingface/datasets/downloads/af7e748544004557b35eef8b0522d4fb2c71e004b82ba8b7343913a15def465f",
 'audio': {'path': "/.cache/huggingface/datasets/downloads/af7e748544004557b35eef8b0522d4fb2c71e004b82ba8b7343913a15def465f",
           'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
           'sampling_rate': 16000},
}
```

### Data Fields

- word_ids: a list of the ids of the words
- word_start_times: a list of the start times of when the words were spoken in seconds
- word_end_times: a list of the end times of when the words were spoken in seconds
- word_speakers: a list of speakers, one for each word
- segment_ids: a list of the ids of the segments
- segment_start_times: a list of the start times of when the segments start
- segment_end_times: a list of the end times of when the segments end
- segment_speakers: a list of speakers, one for each segment
- words: a list of all the spoken words
- channels: a list of all channels that were used for each word
- file: a path to the audio file
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
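As a concrete illustration of this row-first access pattern, here is a minimal sketch (any config from the Data Splits section below works; `"headset-single"` is assumed):

```python
from datasets import load_dataset

ami = load_dataset("ami", "headset-single", split="validation")

# Decodes and resamples only this one audio file:
audio = ami[0]["audio"]
print(audio["sampling_rate"])  # 16000

# By contrast, ami["audio"][0] would first decode every audio file in the split.
```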
### Data Splits

The dataset consists of several configurations, each one having train/validation/test splits:

- headset-single: Close talking audio of a single headset. This configuration only includes audio belonging to the headset of the person currently speaking.
- headset-multi (4 channels): Close talking audio of four individual headsets. This configuration includes audio belonging to four individual headsets. For each annotation there are 4 audio files 0, 1, 2, 3.
- microphone-single: Far field audio of a single microphone. This configuration only includes audio belonging to the first microphone, *i.e.* 1-1, of the microphone array.
- microphone-multi (8 channels): Far field audio of the microphone array. This configuration includes audio of the first microphone array 1-1, 1-2, ..., 1-8.

In general, `headset-single` and `headset-multi` include significantly less noise than `microphone-single` and `microphone-multi`.

| | Train | Valid | Test |
| ----- | ------ | ----- | ---- |
| headset-single | 136 (80h) | 18 (9h) | 16 (9h) |
| headset-multi (4 channels) | 136 (320h) | 18 (36h) | 16 (36h) |
| microphone-single | 136 (80h) | 18 (9h) | 16 (9h) |
| microphone-multi (8 channels) | 136 (640h) | 18 (72h) | 16 (72h) |

Note that each sample contains between 10 and 60 minutes of audio data, which makes it impractical for direct transcription. One should make use of the segment and word start and end times to chunk the samples into smaller samples of manageable size.

## Dataset Creation

All information about the dataset creation can be found [here](https://groups.inf.ed.ac.uk/ami/corpus/overview.shtml).

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

The dataset consists of people who have donated their voice online. You agree not to attempt to determine the identity of speakers in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

CC BY 4.0

### Citation Information

#### TODO

### Contributions

Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) and [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.

#### TODO
ami
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "AMI Corpus", "dataset_info": [{"config_name": "microphone-single", "features": [{"name": "word_ids", "sequence": "string"}, {"name": "word_start_times", "sequence": "float32"}, {"name": "word_end_times", "sequence": "float32"}, {"name": "word_speakers", "sequence": "string"}, {"name": "segment_ids", "sequence": "string"}, {"name": "segment_start_times", "sequence": "float32"}, {"name": "segment_end_times", "sequence": "float32"}, {"name": "segment_speakers", "sequence": "string"}, {"name": "words", "sequence": "string"}, {"name": "channels", "sequence": "string"}, {"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 42013753, "num_examples": 134}, {"name": "validation", "num_bytes": 5110497, "num_examples": 18}, {"name": "test", "num_bytes": 4821283, "num_examples": 16}], "download_size": 11387715153, "dataset_size": 51945533}, {"config_name": "microphone-multi", "features": [{"name": "word_ids", "sequence": "string"}, {"name": "word_start_times", "sequence": "float32"}, {"name": "word_end_times", "sequence": "float32"}, {"name": "word_speakers", "sequence": "string"}, {"name": "segment_ids", "sequence": "string"}, {"name": "segment_start_times", "sequence": "float32"}, {"name": "segment_end_times", "sequence": "float32"}, {"name": "segment_speakers", "sequence": "string"}, {"name": "words", "sequence": "string"}, {"name": "channels", "sequence": "string"}, {"name": "file-1-1", "dtype": "string"}, {"name": "file-1-2", "dtype": "string"}, {"name": "file-1-3", "dtype": "string"}, {"name": "file-1-4", "dtype": "string"}, {"name": "file-1-5", "dtype": "string"}, {"name": "file-1-6", "dtype": "string"}, {"name": "file-1-7", "dtype": "string"}, {"name": "file-1-8", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42126341, "num_examples": 134}, {"name": "validation", "num_bytes": 5125645, "num_examples": 18}, {"name": "test", "num_bytes": 4834751, "num_examples": 16}], "download_size": 90941506169, "dataset_size": 52086737}, {"config_name": "headset-single", "features": [{"name": "word_ids", "sequence": "string"}, {"name": "word_start_times", "sequence": "float32"}, {"name": "word_end_times", "sequence": "float32"}, {"name": "word_speakers", "sequence": "string"}, {"name": "segment_ids", "sequence": "string"}, {"name": "segment_start_times", "sequence": "float32"}, {"name": "segment_end_times", "sequence": "float32"}, {"name": "segment_speakers", "sequence": "string"}, {"name": "words", "sequence": "string"}, {"name": "channels", "sequence": "string"}, {"name": "file", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 42491091, "num_examples": 136}, {"name": "validation", "num_bytes": 5110497, "num_examples": 18}, {"name": "test", "num_bytes": 4821283, "num_examples": 16}], "download_size": 11505070978, "dataset_size": 52422871}, {"config_name": "headset-multi", "features": [{"name": "word_ids", "sequence": "string"}, {"name": "word_start_times", "sequence": "float32"}, {"name": "word_end_times", "sequence": "float32"}, {"name": "word_speakers", "sequence": "string"}, {"name": 
"segment_ids", "sequence": "string"}, {"name": "segment_start_times", "sequence": "float32"}, {"name": "segment_end_times", "sequence": "float32"}, {"name": "segment_speakers", "sequence": "string"}, {"name": "words", "sequence": "string"}, {"name": "channels", "sequence": "string"}, {"name": "file-0", "dtype": "string"}, {"name": "file-1", "dtype": "string"}, {"name": "file-2", "dtype": "string"}, {"name": "file-3", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42540063, "num_examples": 136}, {"name": "validation", "num_bytes": 5116989, "num_examples": 18}, {"name": "test", "num_bytes": 4827055, "num_examples": 16}], "download_size": 45951596391, "dataset_size": 52484107}]}
2024-01-18T11:01:45+00:00
[]
[ "en" ]
TAGS #task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
Dataset Card for AMI Corpus =========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Dataset Preprocessing + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions **Deprecated:** This legacy dataset is outdated. Please, use <a href="URL edinburghcstr/ami </a> instead. Dataset Description ------------------- * Homepage: AMI corpus * Repository: * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary The AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals synchronized to a common timeline. These include close-talking and far-field microphones, individual and room-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings, the participants also have unsynchronized pens available to them that record what is written. The meetings were recorded in English using three different rooms with different acoustic properties, and include mostly non-native speakers. ### Dataset Preprocessing Individual samples of the AMI dataset contain very large audio files (between 10 and 60 minutes). Such lengths are unfeasible for most speech recognition models. In the following, we show how the dataset can effectively be chunked into multiple segments as defined by the dataset creators. The following function cuts the long audio files into the defined segment lengths: The segmented audio files can still be as long as a minute. To further chunk the data into shorter audio chunks, you can use the following script. A segmented and chunked dataset of the config '"headset-single"'can be found here. ### Supported Tasks and Leaderboards * 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task does not have an active leaderboard at the moment. * 'speaker-diarization': The dataset can be used to train model for Speaker Diarization (SD). The model is presented with an audio file and asked to predict which speaker spoke at what time. ### Languages The audio is in English. Dataset Structure ----------------- ### Data Instances A typical data point comprises the path to the audio file (or files in the case of the multi-headset or multi-microphone dataset), called 'file' and its transcription as a list of words, called 'words'. Additional information about the 'speakers', the 'word\_start\_time', 'word\_end\_time', 'segment\_start\_time', 'segment\_end\_time' is given. In addition and its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided. 
### Data Fields * word\_ids: a list of the ids of the words * word\_start\_times: a list of the start times of when the words were spoken in seconds * word\_end\_times: a list of the end times of when the words were spoken in seconds * word\_speakers: a list of speakers one for each word * segment\_ids: a list of the ids of the segments * segment\_start\_times: a list of the start times of when the segments start * segment\_end\_times: a list of the start times of when the segments ends * segment\_speakers: a list of speakers one for each segment * words: a list of all the spoken words * channels: a list of all channels that were used for each word * file: a path to the audio file * audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. ### Data Splits The dataset consists of several configurations, each one having train/validation/test splits: * headset-single: Close talking audio of single headset. This configuration only includes audio belonging to the headset of the person currently speaking. * headset-multi (4 channels): Close talking audio of four individual headset. This configuration includes audio belonging to four individual headsets. For each annotation there are 4 audio files 0, 1, 2, 3. * microphone-single: Far field audio of single microphone. This configuration only includes audio belonging the first microphone, *i.e.* 1-1, of the microphone array. * microphone-multi (8 channels): Far field audio of microphone array. This configuration includes audio of the first microphone array 1-1, 1-2, ..., 1-8. In general, 'headset-single' and 'headset-multi' include significantly less noise than 'microphone-single' and 'microphone-multi'. Note that each sample contains between 10 and 60 minutes of audio data which makes it impractical for direct transcription. One should make use of the segment and word start times and end times to chunk the samples into smaller samples of manageable size. Dataset Creation ---------------- All information about the dataset creation can be found here ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information CC BY 4.0 #### TODO ### Contributions Thanks to @cahya-wirawan and @patrickvonplaten for adding this dataset. #### TODO
[ "### Dataset Summary\n\n\nThe AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals\nsynchronized to a common timeline. These include close-talking and far-field microphones, individual and\nroom-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,\nthe participants also have unsynchronized pens available to them that record what is written. The meetings\nwere recorded in English using three different rooms with different acoustic properties, and include mostly\nnon-native speakers.", "### Dataset Preprocessing\n\n\nIndividual samples of the AMI dataset contain very large audio files (between 10 and 60 minutes).\nSuch lengths are unfeasible for most speech recognition models. In the following, we show how the\ndataset can effectively be chunked into multiple segments as defined by the dataset creators.\n\n\nThe following function cuts the long audio files into the defined segment lengths:\n\n\nThe segmented audio files can still be as long as a minute. To further chunk the data into shorter\naudio chunks, you can use the following script.\n\n\nA segmented and chunked dataset of the config '\"headset-single\"'can be found here.", "### Supported Tasks and Leaderboards\n\n\n* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task does not have an active leaderboard at the moment.\n* 'speaker-diarization': The dataset can be used to train model for Speaker Diarization (SD). The model is presented with an audio file and asked to predict which speaker spoke at what time.", "### Languages\n\n\nThe audio is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises the path to the audio file (or files in the case of\nthe multi-headset or multi-microphone dataset), called 'file' and its transcription as\na list of words, called 'words'. Additional information about the 'speakers', the 'word\\_start\\_time', 'word\\_end\\_time', 'segment\\_start\\_time', 'segment\\_end\\_time' is given.\nIn addition\n\n\nand its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided.", "### Data Fields\n\n\n* word\\_ids: a list of the ids of the words\n* word\\_start\\_times: a list of the start times of when the words were spoken in seconds\n* word\\_end\\_times: a list of the end times of when the words were spoken in seconds\n* word\\_speakers: a list of speakers one for each word\n* segment\\_ids: a list of the ids of the segments\n* segment\\_start\\_times: a list of the start times of when the segments start\n* segment\\_end\\_times: a list of the start times of when the segments ends\n* segment\\_speakers: a list of speakers one for each segment\n* words: a list of all the spoken words\n* channels: a list of all channels that were used for each word\n* file: a path to the audio file\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. 
Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.", "### Data Splits\n\n\nThe dataset consists of several configurations, each one having train/validation/test splits:\n\n\n* headset-single: Close talking audio of single headset. This configuration only includes audio belonging to the headset of the person currently speaking.\n* headset-multi (4 channels): Close talking audio of four individual headset. This configuration includes audio belonging to four individual headsets. For each annotation there are 4 audio files 0, 1, 2, 3.\n* microphone-single: Far field audio of single microphone. This configuration only includes audio belonging the first microphone, *i.e.* 1-1, of the microphone array.\n* microphone-multi (8 channels): Far field audio of microphone array. This configuration includes audio of the first microphone array 1-1, 1-2, ..., 1-8.\n\n\nIn general, 'headset-single' and 'headset-multi' include significantly less noise than\n'microphone-single' and 'microphone-multi'.\n\n\n\nNote that each sample contains between 10 and 60 minutes of audio data which makes it\nimpractical for direct transcription. One should make use of the segment and word start times and end times to chunk the samples into smaller samples of manageable size.\n\n\nDataset Creation\n----------------\n\n\nAll information about the dataset creation can be found\nhere", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCC BY 4.0", "#### TODO", "### Contributions\n\n\nThanks to @cahya-wirawan and @patrickvonplaten for adding this dataset.", "#### TODO" ]
[ "TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nThe AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals\nsynchronized to a common timeline. These include close-talking and far-field microphones, individual and\nroom-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,\nthe participants also have unsynchronized pens available to them that record what is written. The meetings\nwere recorded in English using three different rooms with different acoustic properties, and include mostly\nnon-native speakers.", "### Dataset Preprocessing\n\n\nIndividual samples of the AMI dataset contain very large audio files (between 10 and 60 minutes).\nSuch lengths are unfeasible for most speech recognition models. In the following, we show how the\ndataset can effectively be chunked into multiple segments as defined by the dataset creators.\n\n\nThe following function cuts the long audio files into the defined segment lengths:\n\n\nThe segmented audio files can still be as long as a minute. To further chunk the data into shorter\naudio chunks, you can use the following script.\n\n\nA segmented and chunked dataset of the config '\"headset-single\"'can be found here.", "### Supported Tasks and Leaderboards\n\n\n* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task does not have an active leaderboard at the moment.\n* 'speaker-diarization': The dataset can be used to train model for Speaker Diarization (SD). The model is presented with an audio file and asked to predict which speaker spoke at what time.", "### Languages\n\n\nThe audio is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises the path to the audio file (or files in the case of\nthe multi-headset or multi-microphone dataset), called 'file' and its transcription as\na list of words, called 'words'. Additional information about the 'speakers', the 'word\\_start\\_time', 'word\\_end\\_time', 'segment\\_start\\_time', 'segment\\_end\\_time' is given.\nIn addition\n\n\nand its transcription, called 'text'. 
Some additional information about the speaker and the passage which contains the transcription is provided.", "### Data Fields\n\n\n* word\\_ids: a list of the ids of the words\n* word\\_start\\_times: a list of the start times of when the words were spoken in seconds\n* word\\_end\\_times: a list of the end times of when the words were spoken in seconds\n* word\\_speakers: a list of speakers one for each word\n* segment\\_ids: a list of the ids of the segments\n* segment\\_start\\_times: a list of the start times of when the segments start\n* segment\\_end\\_times: a list of the start times of when the segments ends\n* segment\\_speakers: a list of speakers one for each segment\n* words: a list of all the spoken words\n* channels: a list of all channels that were used for each word\n* file: a path to the audio file\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.", "### Data Splits\n\n\nThe dataset consists of several configurations, each one having train/validation/test splits:\n\n\n* headset-single: Close talking audio of single headset. This configuration only includes audio belonging to the headset of the person currently speaking.\n* headset-multi (4 channels): Close talking audio of four individual headset. This configuration includes audio belonging to four individual headsets. For each annotation there are 4 audio files 0, 1, 2, 3.\n* microphone-single: Far field audio of single microphone. This configuration only includes audio belonging the first microphone, *i.e.* 1-1, of the microphone array.\n* microphone-multi (8 channels): Far field audio of microphone array. This configuration includes audio of the first microphone array 1-1, 1-2, ..., 1-8.\n\n\nIn general, 'headset-single' and 'headset-multi' include significantly less noise than\n'microphone-single' and 'microphone-multi'.\n\n\n\nNote that each sample contains between 10 and 60 minutes of audio data which makes it\nimpractical for direct transcription. One should make use of the segment and word start times and end times to chunk the samples into smaller samples of manageable size.\n\n\nDataset Creation\n----------------\n\n\nAll information about the dataset creation can be found\nhere", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCC BY 4.0", "#### TODO", "### Contributions\n\n\nThanks to @cahya-wirawan and @patrickvonplaten for adding this dataset.", "#### TODO" ]
[ 98, 126, 150, 140, 17, 145, 370, 296, 7, 4, 10, 10, 5, 5, 9, 50, 7, 8, 14, 6, 9, 4, 26, 4 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n### Dataset Summary\n\n\nThe AMI Meeting Corpus consists of 100 hours of meeting recordings. The recordings use a range of signals\nsynchronized to a common timeline. These include close-talking and far-field microphones, individual and\nroom-view video cameras, and output from a slide projector and an electronic whiteboard. During the meetings,\nthe participants also have unsynchronized pens available to them that record what is written. The meetings\nwere recorded in English using three different rooms with different acoustic properties, and include mostly\nnon-native speakers.### Dataset Preprocessing\n\n\nIndividual samples of the AMI dataset contain very large audio files (between 10 and 60 minutes).\nSuch lengths are unfeasible for most speech recognition models. In the following, we show how the\ndataset can effectively be chunked into multiple segments as defined by the dataset creators.\n\n\nThe following function cuts the long audio files into the defined segment lengths:\n\n\nThe segmented audio files can still be as long as a minute. To further chunk the data into shorter\naudio chunks, you can use the following script.\n\n\nA segmented and chunked dataset of the config '\"headset-single\"'can be found here.", "passage: ### Supported Tasks and Leaderboards\n\n\n* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task does not have an active leaderboard at the moment.\n* 'speaker-diarization': The dataset can be used to train model for Speaker Diarization (SD). The model is presented with an audio file and asked to predict which speaker spoke at what time.### Languages\n\n\nThe audio is in English.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA typical data point comprises the path to the audio file (or files in the case of\nthe multi-headset or multi-microphone dataset), called 'file' and its transcription as\na list of words, called 'words'. Additional information about the 'speakers', the 'word\\_start\\_time', 'word\\_end\\_time', 'segment\\_start\\_time', 'segment\\_end\\_time' is given.\nIn addition\n\n\nand its transcription, called 'text'. Some additional information about the speaker and the passage which contains the transcription is provided." ]
271a5aa99e75e936e334b3c52ec178f08bced629
# Dataset Card for AMTTL

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/adapt-sjtu/AMTTL/tree/master/medical_data)
- **Repository:** [Github](https://github.com/adapt-sjtu/AMTTL/tree/master/medical_data)
- **Paper:** [Aclweb](http://aclweb.org/anthology/C18-1307)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

[More Information Needed]

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@inproceedings{xing2018adaptive,
  title={Adaptive multi-task transfer learning for Chinese word segmentation in medical text},
  author={Xing, Junjie and Zhu, Kenny and Zhang, Shaodian},
  booktitle={Proceedings of the 27th International Conference on Computational Linguistics},
  pages={3619--3630},
  year={2018}
}
```

### Contributions

Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset.
amttl
[ "task_categories:token-classification", "task_ids:parsing", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:zh", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["zh"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["parsing"], "pretty_name": "AMTTL", "dataset_info": {"config_name": "amttl", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "tags", "sequence": {"class_label": {"names": {"0": "B", "1": "I", "2": "E", "3": "S"}}}}], "splits": [{"name": "train", "num_bytes": 1132196, "num_examples": 3063}, {"name": "validation", "num_bytes": 324358, "num_examples": 822}, {"name": "test", "num_bytes": 328509, "num_examples": 908}], "download_size": 274351, "dataset_size": 1785063}, "configs": [{"config_name": "amttl", "data_files": [{"split": "train", "path": "amttl/train-*"}, {"split": "validation", "path": "amttl/validation-*"}, {"split": "test", "path": "amttl/test-*"}], "default": true}]}
2024-01-09T12:28:18+00:00
[]
[ "zh" ]
TAGS #task_categories-token-classification #task_ids-parsing #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Chinese #license-mit #region-us
# Dataset Card for AMTTL ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Github - Repository: Github - Paper: Aclweb - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @JetRunner for adding this dataset.
[ "# Dataset Card for AMTTL", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Aclweb\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @JetRunner for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-parsing #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Chinese #license-mit #region-us \n", "# Dataset Card for AMTTL", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Aclweb\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @JetRunner for adding this dataset." ]
[ 85, 8, 120, 33, 6, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 18 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-parsing #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Chinese #license-mit #region-us \n# Dataset Card for AMTTL## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Aclweb\n- Leaderboard:\n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @JetRunner for adding this dataset." ]
8e4813d81f46d313dac7892e1c28076917cfcdf9
# Dataset Card for "anli"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** [https://github.com/facebookresearch/anli/](https://github.com/facebookresearch/anli/)
- **Paper:** [Adversarial NLI: A New Benchmark for Natural Language Understanding](https://arxiv.org/abs/1910.14599)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 18.62 MB
- **Size of the generated dataset:** 77.12 MB
- **Total amount of disk used:** 95.75 MB

### Dataset Summary

The Adversarial Natural Language Inference (ANLI) dataset is a large-scale NLI benchmark collected via an iterative, adversarial human-and-model-in-the-loop procedure. ANLI is much more difficult than its predecessors, including SNLI and MNLI. It contains three rounds; each round has train/dev/test splits.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

English

## Dataset Structure

### Data Instances

#### plain_text

- **Size of downloaded dataset files:** 18.62 MB
- **Size of the generated dataset:** 77.12 MB
- **Total amount of disk used:** 95.75 MB

An example of 'train_r2' looks as follows.

```
This example was too long and was cropped:

{
    "hypothesis": "Idris Sultan was born in the first month of the year preceding 1994.",
    "label": 0,
    "premise": "\"Idris Sultan (born January 1993) is a Tanzanian Actor and comedian, actor and radio host who won the Big Brother Africa-Hotshot...",
    "reason": "",
    "uid": "ed5c37ab-77c5-4dbc-ba75-8fd617b19712"
}
```

### Data Fields

The data fields are the same among all splits.

#### plain_text

- `uid`: a `string` feature.
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `reason`: a `string` feature.
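To make these fields concrete, here is a minimal loading sketch (`plain_text` is the default config; the split names match the table in the next section):

```python
from datasets import load_dataset

anli = load_dataset("facebook/anli", split="train_r1")

example = anli[0]
print(example["premise"])
print(example["hypothesis"])

# Map the integer label back to its name ("entailment", "neutral", "contradiction"):
print(anli.features["label"].int2str(example["label"]))
```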
### Data Splits

| name |train_r1|dev_r1|train_r2|dev_r2|train_r3|dev_r3|test_r1|test_r2|test_r3|
|----------|-------:|-----:|-------:|-----:|-------:|-----:|------:|------:|------:|
|plain_text| 16946| 1000| 45460| 1000| 100459| 1200| 1000| 1000| 1200|

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[CC BY-NC 4.0 (Attribution-NonCommercial)](https://github.com/facebookresearch/anli/blob/main/LICENSE)

### Citation Information

```
@InProceedings{nie2019adversarial,
    title={Adversarial NLI: A New Benchmark for Natural Language Understanding},
    author={Nie, Yixin and Williams, Adina and Dinan, Emily and Bansal, Mohit and Weston, Jason and Kiela, Douwe},
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    year = "2020",
    publisher = "Association for Computational Linguistics",
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@easonnie](https://github.com/easonnie), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
facebook/anli
[ "task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:multi-input-text-classification", "annotations_creators:crowdsourced", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "source_datasets:extended|hotpot_qa", "language:en", "license:cc-by-nc-4.0", "arxiv:1910.14599", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original", "extended|hotpot_qa"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference", "multi-input-text-classification"], "paperswithcode_id": "anli", "pretty_name": "Adversarial NLI", "dataset_info": {"config_name": "plain_text", "features": [{"name": "uid", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "reason", "dtype": "string"}], "splits": [{"name": "train_r1", "num_bytes": 8006888, "num_examples": 16946}, {"name": "dev_r1", "num_bytes": 573428, "num_examples": 1000}, {"name": "test_r1", "num_bytes": 574917, "num_examples": 1000}, {"name": "train_r2", "num_bytes": 20801581, "num_examples": 45460}, {"name": "dev_r2", "num_bytes": 556066, "num_examples": 1000}, {"name": "test_r2", "num_bytes": 572639, "num_examples": 1000}, {"name": "train_r3", "num_bytes": 44720719, "num_examples": 100459}, {"name": "dev_r3", "num_bytes": 663148, "num_examples": 1200}, {"name": "test_r3", "num_bytes": 657586, "num_examples": 1200}], "download_size": 26286748, "dataset_size": 77126972}, "configs": [{"config_name": "plain_text", "data_files": [{"split": "train_r1", "path": "plain_text/train_r1-*"}, {"split": "dev_r1", "path": "plain_text/dev_r1-*"}, {"split": "test_r1", "path": "plain_text/test_r1-*"}, {"split": "train_r2", "path": "plain_text/train_r2-*"}, {"split": "dev_r2", "path": "plain_text/dev_r2-*"}, {"split": "test_r2", "path": "plain_text/test_r2-*"}, {"split": "train_r3", "path": "plain_text/train_r3-*"}, {"split": "dev_r3", "path": "plain_text/dev_r3-*"}, {"split": "test_r3", "path": "plain_text/test_r3-*"}], "default": true}]}
2023-12-21T15:34:02+00:00
[ "1910.14599" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-natural-language-inference #task_ids-multi-input-text-classification #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #source_datasets-extended|hotpot_qa #language-English #license-cc-by-nc-4.0 #arxiv-1910.14599 #region-us
Dataset Card for "anli" ======================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: URL * Paper: Adversarial NLI: A New Benchmark for Natural Language Understanding * Point of Contact: * Size of downloaded dataset files: 18.62 MB * Size of the generated dataset: 77.12 MB * Total amount of disk used: 95.75 MB ### Dataset Summary The Adversarial Natural Language Inference (ANLI) is a new large-scale NLI benchmark dataset, The dataset is collected via an iterative, adversarial human-and-model-in-the-loop procedure. ANLI is much more difficult than its predecessors including SNLI and MNLI. It contains three rounds. Each round has train/dev/test splits. ### Supported Tasks and Leaderboards ### Languages English Dataset Structure ----------------- ### Data Instances #### plain\_text * Size of downloaded dataset files: 18.62 MB * Size of the generated dataset: 77.12 MB * Total amount of disk used: 95.75 MB An example of 'train\_r2' looks as follows. ### Data Fields The data fields are the same among all splits. #### plain\_text * 'uid': a 'string' feature. * 'premise': a 'string' feature. * 'hypothesis': a 'string' feature. * 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2). * 'reason': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information cc-4 Attribution-NonCommercial ### Contributions Thanks to @thomwolf, @easonnie, @lhoestq, @patrickvonplaten for adding this dataset.
[ "### Dataset Summary\n\n\nThe Adversarial Natural Language Inference (ANLI) is a new large-scale NLI benchmark dataset,\nThe dataset is collected via an iterative, adversarial human-and-model-in-the-loop procedure.\nANLI is much more difficult than its predecessors including SNLI and MNLI.\nIt contains three rounds. Each round has train/dev/test splits.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 18.62 MB\n* Size of the generated dataset: 77.12 MB\n* Total amount of disk used: 95.75 MB\n\n\nAn example of 'train\\_r2' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'uid': a 'string' feature.\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'reason': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\ncc-4 Attribution-NonCommercial", "### Contributions\n\n\nThanks to @thomwolf, @easonnie, @lhoestq, @patrickvonplaten for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #task_ids-multi-input-text-classification #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #source_datasets-extended|hotpot_qa #language-English #license-cc-by-nc-4.0 #arxiv-1910.14599 #region-us \n", "### Dataset Summary\n\n\nThe Adversarial Natural Language Inference (ANLI) is a new large-scale NLI benchmark dataset,\nThe dataset is collected via an iterative, adversarial human-and-model-in-the-loop procedure.\nANLI is much more difficult than its predecessors including SNLI and MNLI.\nIt contains three rounds. Each round has train/dev/test splits.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 18.62 MB\n* Size of the generated dataset: 77.12 MB\n* Total amount of disk used: 95.75 MB\n\n\nAn example of 'train\\_r2' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'uid': a 'string' feature.\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'reason': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\ncc-4 Attribution-NonCommercial", "### Contributions\n\n\nThanks to @thomwolf, @easonnie, @lhoestq, @patrickvonplaten for adding this dataset." ]
[ 146, 97, 10, 12, 6, 58, 17, 89, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 15, 35 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #task_ids-multi-input-text-classification #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #source_datasets-extended|hotpot_qa #language-English #license-cc-by-nc-4.0 #arxiv-1910.14599 #region-us \n### Dataset Summary\n\n\nThe Adversarial Natural Language Inference (ANLI) is a new large-scale NLI benchmark dataset,\nThe dataset is collected via an iterative, adversarial human-and-model-in-the-loop procedure.\nANLI is much more difficult than its predecessors including SNLI and MNLI.\nIt contains three rounds. Each round has train/dev/test splits.### Supported Tasks and Leaderboards### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------### Data Instances#### plain\\_text\n\n\n* Size of downloaded dataset files: 18.62 MB\n* Size of the generated dataset: 77.12 MB\n* Total amount of disk used: 95.75 MB\n\n\nAn example of 'train\\_r2' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### plain\\_text\n\n\n* 'uid': a 'string' feature.\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'label': a classification label, with possible values including 'entailment' (0), 'neutral' (1), 'contradiction' (2).\n* 'reason': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?" ]
9eaa95f66364367e8752b0f34c00f67aafa95d15
# Dataset Card for [Dataset Name]

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Home Page](https://github.com/sealuzh/user_quality)
- **Repository:** [Repo Link](https://github.com/sealuzh/user_quality)
- **Paper:** [Link](https://giograno.me/assets/pdf/workshop/wama17.pdf)
- **Leaderboard:**
- **Point of Contact:** [Darshan Gandhi](mailto:darshangandhi1151@gmail.com)

### Dataset Summary

This is a large dataset of Android applications belonging to 23 different app categories. It provides an overview of the types of feedback users report on the apps and documents the evolution of the related code metrics. The dataset covers about 395 applications from the F-Droid repository, including around 600 versions and 280,000 user reviews (extracted with specific text-mining approaches).

### Supported Tasks and Leaderboards

The dataset comprises 395 different apps from the F-Droid repository, including code quality indicators for 629 versions of these apps. It also includes app reviews related to each of these versions, which have been automatically categorized into types of user feedback from a software maintenance and evolution perspective.

### Languages

The dataset is monolingual; all reviews are in English.

## Dataset Structure

### Data Instances

Each instance is a user review in English:

{'package_name': 'com.mantz_it.rfanalyzer',
 'review': "Great app! The new version now works on my Bravia Android TV which is great as it's right by my rooftop aerial cable. The scan feature would be useful...any ETA on when this will be available? Also the option to import a list of bookmarks e.g. from a simple properties file would be useful.",
 'date': 'October 12 2016',
 'star': 4}

### Data Fields

* package_name : Name of the software application package
* review : Message of the user
* date : Date when the user posted the review
* star : Rating provided by the user for the application

### Data Splits

There is a single training split with a total of 288,065 reviews.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

This dataset helps in understanding software applications and users' views and opinions of them. It shows which types of applications users prefer and how these applications help users solve their problems and issues.

### Discussion of Biases

The reviews cover only open-source applications; other sectors have not been considered here.

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Giovanni Grano - (University of Zurich), Sebastiano Panichella - (University of Zurich), Andrea di Sorbo - (University of Sannio)

### Licensing Information

[More Information Needed]

### Citation Information

@InProceedings{grano2017software,
title = {Software Applications User Reviews},
authors={Grano, Giovanni; Di Sorbo, Andrea; Mercaldo, Francesco; Visaggio, Corrado A; Canfora, Gerardo; Panichella, Sebastiano},
year={2017}
}

### Contributions

Thanks to [@darshan-gandhi](https://github.com/darshan-gandhi) for adding this dataset.
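A minimal usage sketch follows, assuming the dataset is hosted on the Hugging Face Hub under the id `app_reviews` (as the record id below indicates); it loads the single training split and inspects the fields described in the card:

```python
from datasets import load_dataset

# Single training split per the card (288,065 reviews).
reviews = load_dataset("app_reviews", split="train")

first = reviews[0]
print(first["package_name"], first["date"], first["star"])

# Average star rating over the whole split (column access returns a list).
mean_star = sum(reviews["star"]) / len(reviews)
print(f"mean rating: {mean_star:.2f}")
```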
app_reviews
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:sentiment-scoring", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["text-scoring", "sentiment-scoring"], "pretty_name": "AppReviews", "dataset_info": {"features": [{"name": "package_name", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "star", "dtype": "int8"}], "splits": [{"name": "train", "num_bytes": 32768731, "num_examples": 288065}], "download_size": 13207727, "dataset_size": 32768731}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-09T12:30:17+00:00
[]
[ "en" ]
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Home Page\n- Repository: Repo Link\n- Paper: Link\n- Leaderboard:\n- Point of Contact: Darshan Gandhi", "### Dataset Summary\n\nIt is a large dataset of Android applications belonging to 23 differentapps categories, which provides an overview of the types of feedback users report on the apps and documents the evolution of the related code metrics. The dataset contains about 395 applications of the F-Droid repository, including around 600 versions, 280,000 user reviews (extracted with specific text mining approaches)", "### Supported Tasks and Leaderboards\n\nThe dataset we provide comprises 395 different apps from F-Droid repository, including code quality indicators of 629 versions of these\napps. It also encloses app reviews related to each of these versions, which have been automatically categorized classifying types of user feedback from a software maintenance and evolution perspective.", "### Languages\n\nThe dataset is a monolingual dataset which has the messages English.", "## Dataset Structure", "### Data Instances\n\nThe dataset consists of a message in English.\n\n{'package_name': 'com.mantz_it.rfanalyzer',\n 'review': \"Great app! The new version now works on my Bravia Android TV which is great as it's right by my rooftop aerial cable. The scan feature would be useful...any ETA on when this will be available? Also the option to import a list of bookmarks e.g. from a simple properties file would be useful.\",\n 'date': 'October 12 2016',\n 'star': 4}", "### Data Fields\n\n* package_name : Name of the Software Application Package\n* review : Message of the user \n* date : date when the user posted the review \n* star : rating provied by the user for the application", "### Data Splits\n\nThere is training data, with a total of : 288065", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nWith the help of this dataset one can try to understand more about software applications and what are the views and opinions of the users about them. 
This helps to understand more about which type of software applications are prefeered by the users and how do these applications facilitate the user to help them solve their problems and issues.", "### Discussion of Biases\n\nThe reviews are only for applications which are in the open-source software applications, the other sectors have not been considered here", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nGiovanni Grano - (University of Zurich), Sebastiano Panichella - (University of Zurich), Andrea di Sorbo - (University of Sannio)", "### Licensing Information\n\n\n\n\n\n@InProceedings{Zurich Open Repository and\nArchive:dataset,\ntitle = {Software Applications User Reviews},\nauthors={Grano, Giovanni; Di Sorbo, Andrea; Mercaldo, Francesco; Visaggio, Corrado A; Canfora, Gerardo;\nPanichella, Sebastiano},\nyear={2017}\n}", "### Contributions\n\nThanks to @darshan-gandhi for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-sentiment-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-unknown #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Home Page\n- Repository: Repo Link\n- Paper: Link\n- Leaderboard:\n- Point of Contact: Darshan Gandhi", "### Dataset Summary\n\nIt is a large dataset of Android applications belonging to 23 differentapps categories, which provides an overview of the types of feedback users report on the apps and documents the evolution of the related code metrics. The dataset contains about 395 applications of the F-Droid repository, including around 600 versions, 280,000 user reviews (extracted with specific text mining approaches)", "### Supported Tasks and Leaderboards\n\nThe dataset we provide comprises 395 different apps from F-Droid repository, including code quality indicators of 629 versions of these\napps. It also encloses app reviews related to each of these versions, which have been automatically categorized classifying types of user feedback from a software maintenance and evolution perspective.", "### Languages\n\nThe dataset is a monolingual dataset which has the messages English.", "## Dataset Structure", "### Data Instances\n\nThe dataset consists of a message in English.\n\n{'package_name': 'com.mantz_it.rfanalyzer',\n 'review': \"Great app! The new version now works on my Bravia Android TV which is great as it's right by my rooftop aerial cable. The scan feature would be useful...any ETA on when this will be available? Also the option to import a list of bookmarks e.g. from a simple properties file would be useful.\",\n 'date': 'October 12 2016',\n 'star': 4}", "### Data Fields\n\n* package_name : Name of the Software Application Package\n* review : Message of the user \n* date : date when the user posted the review \n* star : rating provied by the user for the application", "### Data Splits\n\nThere is training data, with a total of : 288065", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nWith the help of this dataset one can try to understand more about software applications and what are the views and opinions of the users about them. 
This helps to understand more about which type of software applications are prefeered by the users and how do these applications facilitate the user to help them solve their problems and issues.", "### Discussion of Biases\n\nThe reviews are only for applications which are in the open-source software applications, the other sectors have not been considered here", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nGiovanni Grano - (University of Zurich), Sebastiano Panichella - (University of Zurich), Andrea di Sorbo - (University of Sannio)", "### Licensing Information\n\n\n\n\n\n@InProceedings{Zurich Open Repository and\nArchive:dataset,\ntitle = {Software Applications User Reviews},\nauthors={Grano, Giovanni; Di Sorbo, Andrea; Mercaldo, Francesco; Visaggio, Corrado A; Canfora, Gerardo;\nPanichella, Sebastiano},\nyear={2017}\n}", "### Contributions\n\nThanks to @darshan-gandhi for adding this dataset." ]
[ 101, 10, 120, 33, 92, 80, 20, 6, 133, 47, 18, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 73, 33, 7, 5, 42, 85, 19 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-sentiment-scoring #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-unknown #region-us \n# Dataset Card for [Dataset Name]## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Home Page\n- Repository: Repo Link\n- Paper: Link\n- Leaderboard:\n- Point of Contact: Darshan Gandhi### Dataset Summary\n\nIt is a large dataset of Android applications belonging to 23 differentapps categories, which provides an overview of the types of feedback users report on the apps and documents the evolution of the related code metrics. The dataset contains about 395 applications of the F-Droid repository, including around 600 versions, 280,000 user reviews (extracted with specific text mining approaches)### Supported Tasks and Leaderboards\n\nThe dataset we provide comprises 395 different apps from F-Droid repository, including code quality indicators of 629 versions of these\napps. It also encloses app reviews related to each of these versions, which have been automatically categorized classifying types of user feedback from a software maintenance and evolution perspective.### Languages\n\nThe dataset is a monolingual dataset which has the messages English.## Dataset Structure" ]
33301c6a050c96af81f63cad5562cb5363e88971
# Dataset Card for AQUA-RAT ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/deepmind/AQuA](https://github.com/deepmind/AQuA) - **Repository:** [https://github.com/deepmind/AQuA](https://github.com/deepmind/AQuA) - **Paper:** [https://arxiv.org/pdf/1705.04146.pdf](https://arxiv.org/pdf/1705.04146.pdf) ### Dataset Summary A large-scale dataset consisting of approximately 100,000 algebraic word problems. The solution to each question is explained step-by-step using natural language. This data is used to train a program generation model that learns to generate the explanation, while generating the program that solves the question. ### Supported Tasks and Leaderboards ### Languages en ## Dataset Structure ### Data Instances ``` { "question": "A grocery sells a bag of ice for $1.25, and makes 20% profit. If it sells 500 bags of ice, how much total profit does it make?", "options": ["A)125", "B)150", "C)225", "D)250", "E)275"], "rationale": "Profit per bag = 1.25 * 0.20 = 0.25\nTotal profit = 500 * 0.25 = 125\nAnswer is A.", "correct": "A" } ``` ### Data Fields - `question` : (str) A natural language definition of the problem to solve - `options` : (list(str)) 5 possible options (A, B, C, D and E), among which one is correct - `rationale` : (str) A natural language description of the solution to the problem - `correct` : (str) The correct option ### Data Splits | | Train | Valid | Test | | ----- | ------ | ----- | ---- | | Examples | 97467 | 254 | 254 | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. 
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ### Citation Information ``` @article{ling2017program, title={Program induction by rationale generation: Learning to solve and explain algebraic word problems}, author={Ling, Wang and Yogatama, Dani and Dyer, Chris and Blunsom, Phil}, journal={ACL}, year={2017} } ``` ### Contributions Thanks to [@arkhalid](https://github.com/arkhalid) for adding this dataset.
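As a hedged usage sketch (assuming the Hub id `aqua_rat` and the `raw`/`tokenized` configurations listed in the metadata below), one way to turn an instance into a multiple-choice prompt:

```python
from datasets import load_dataset

# "raw" is the default configuration; a "tokenized" variant also exists.
aqua = load_dataset("aqua_rat", "raw")

ex = aqua["train"][0]
# Concatenate the question with its five options (A-E) into one prompt.
prompt = ex["question"] + "\n" + "\n".join(ex["options"])
print(prompt)
print("rationale:", ex["rationale"])
print("answer:", ex["correct"])  # one of "A".."E"
```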
aqua_rat
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:1705.04146", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "paperswithcode_id": "aqua-rat", "pretty_name": "Algebra Question Answering with Rationales", "dataset_info": [{"config_name": "raw", "features": [{"name": "question", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "rationale", "dtype": "string"}, {"name": "correct", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42333059, "num_examples": 97467}, {"name": "test", "num_bytes": 116759, "num_examples": 254}, {"name": "validation", "num_bytes": 118616, "num_examples": 254}], "download_size": 25568676, "dataset_size": 42568434}, {"config_name": "tokenized", "features": [{"name": "question", "dtype": "string"}, {"name": "options", "sequence": "string"}, {"name": "rationale", "dtype": "string"}, {"name": "correct", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 46493643, "num_examples": 97467}, {"name": "test", "num_bytes": 126263, "num_examples": 254}, {"name": "validation", "num_bytes": 128853, "num_examples": 254}], "download_size": 26429873, "dataset_size": 46748759}], "configs": [{"config_name": "raw", "data_files": [{"split": "train", "path": "raw/train-*"}, {"split": "test", "path": "raw/test-*"}, {"split": "validation", "path": "raw/validation-*"}], "default": true}, {"config_name": "tokenized", "data_files": [{"split": "train", "path": "tokenized/train-*"}, {"split": "test", "path": "tokenized/test-*"}, {"split": "validation", "path": "tokenized/validation-*"}]}]}
2024-01-09T12:33:06+00:00
[ "1705.04146" ]
[ "en" ]
[ "### Dataset Summary\n\n\nA large-scale dataset consisting of approximately 100,000 algebraic word problems.\nThe solution to each question is explained step-by-step using natural language.\nThis data is used to train a program generation model that learns to generate the explanation,\nwhile generating the program that solves the question.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nen\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'question' : (str) A natural language definition of the problem to solve\n* 'options' : (list(str)) 5 possible options (A, B, C, D and E), among which one is correct\n* 'rationale' : (str) A natural language description of the solution to the problem\n* 'correct' : (str) The correct option", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCopyright 2017 Google Inc.\n\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n\n\n```\nURL\n\n```\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.", "### Contributions\n\n\nThanks to @arkhalid for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #arxiv-1705.04146 #region-us \n", "### Dataset Summary\n\n\nA large-scale dataset consisting of approximately 100,000 algebraic word problems.\nThe solution to each question is explained step-by-step using natural language.\nThis data is used to train a program generation model that learns to generate the explanation,\nwhile generating the program that solves the question.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nen\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'question' : (str) A natural language definition of the problem to solve\n* 'options' : (list(str)) 5 possible options (A, B, C, D and E), among which one is correct\n* 'rationale' : (str) A natural language description of the solution to the problem\n* 'correct' : (str) The correct option", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nCopyright 2017 Google Inc.\n\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n\n\n```\nURL\n\n```\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.", "### Contributions\n\n\nThanks to @arkhalid for adding this dataset." ]
[ 115, 72, 10, 12, 6, 85, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 126, 17 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #arxiv-1705.04146 #region-us \n### Dataset Summary\n\n\nA large-scale dataset consisting of approximately 100,000 algebraic word problems.\nThe solution to each question is explained step-by-step using natural language.\nThis data is used to train a program generation model that learns to generate the explanation,\nwhile generating the program that solves the question.### Supported Tasks and Leaderboards### Languages\n\n\nen\n\n\nDataset Structure\n-----------------### Data Instances### Data Fields\n\n\n* 'question' : (str) A natural language definition of the problem to solve\n* 'options' : (list(str)) 5 possible options (A, B, C, D and E), among which one is correct\n* 'rationale' : (str) A natural language description of the solution to the problem\n* 'correct' : (str) The correct option### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators" ]
84df3ebd8bfe31e2875d242300161ea64ac2b06b
# Dataset Card for AQuaMuSe

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/google-research-datasets/aquamuse
- **Repository:** https://github.com/google-research-datasets/aquamuse
- **Paper:** https://arxiv.org/pdf/2010.12694.pdf
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

AQuaMuSe is a novel, scalable approach to automatically mine dual query-based multi-document summarization datasets for extractive and abstractive summaries, using a question answering dataset (Google Natural Questions) and large document corpora (Common Crawl).

This dataset contains versions of automatically generated datasets for abstractive and extractive query-based multi-document summarization, as described in the [AQuaMuSe paper](https://arxiv.org/pdf/2010.12694.pdf).

### Supported Tasks and Leaderboards

- **Abstractive** and **Extractive** query-based multi-document summarization
- Question Answering

### Languages

en : English

## Dataset Structure

### Data Instances

- `input_urls`: a `list` of `string` features.
- `query`: a `string` feature.
- `target`: a `string` feature

Example:

```
{
 'input_urls': ['https://boxofficebuz.com/person/19653-charles-michael-davis'],
 'query': 'who is the actor that plays marcel on the originals',
 'target': "In February 2013, it was announced that Davis was cast in a lead role on The CW's new show The Originals, a spinoff of The Vampire Diaries, centered on the Original Family as they move to New Orleans, where Davis' character (a vampire named Marcel) currently rules."
}
```

### Data Fields

- `input_urls`: a `list` of `string` features.
  - List of URLs to input documents pointing to [Common Crawl](https://commoncrawl.org/2017/07/june-2017-crawl-archive-now-available) to be summarized.
  - Dependencies: document URLs reference the [Common Crawl June 2017 Archive](https://commoncrawl.org/2017/07/june-2017-crawl-archive-now-available).
- `query`: a `string` feature.
  - Input query to be used as summarization context. This is derived from [Natural Questions](https://ai.google.com/research/NaturalQuestions/) user queries.
- `target`: a `string` feature
  - Summarization target, derived from [Natural Questions](https://ai.google.com/research/NaturalQuestions/) long answers.
### Data Splits

- This dataset has two high-level configurations, `abstractive` and `extractive`
- Each configuration has the data splits `train`, `dev` and `test`
- The original data was in [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) format, which has been parsed into the format specified in [Data Instances](#data-instances)

## Dataset Creation

### Curation Rationale

The dataset was automatically generated for abstractive and extractive query-based multi-document summarization, as described in the [AQuaMuSe paper](https://arxiv.org/pdf/2010.12694.pdf).

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset curator is [sayalikulkarni](https://github.com/google-research-datasets/aquamuse/commits?author=sayalikulkarni), a contributor to the official GitHub repository for this dataset and one of the authors of the dataset's paper. As the account handles of the other authors, who were also part of the curation of this dataset, are not currently available, the authors of the paper are listed here: Sayali Kulkarni, Sheide Chammas, Wan Zhu, Fei Sha, and Eugene Ie.

### Licensing Information

[More Information Needed]

### Citation Information

@misc{kulkarni2020aquamuse,
      title={AQuaMuSe: Automatically Generating Datasets for Query-Based Multi-Document Summarization},
      author={Sayali Kulkarni and Sheide Chammas and Wan Zhu and Fei Sha and Eugene Ie},
      year={2020},
      eprint={2010.12694},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

### Contributions

Thanks to [@Karthik-Bhaskar](https://github.com/Karthik-Bhaskar) for adding this dataset.
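A minimal usage sketch, assuming the Hub id `aquamuse` and the `abstractive`/`extractive` configurations listed in the metadata below (note the parsed splits are named `train`/`test`/`validation` there):

```python
from datasets import load_dataset

# Two high-level configurations: "abstractive" and "extractive".
aquamuse = load_dataset("aquamuse", "abstractive")

ex = aquamuse["train"][0]
print("query:", ex["query"])
print("input documents:", ex["input_urls"])   # Common Crawl URLs
print("target summary:", ex["target"][:200])  # first 200 characters
```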
aquamuse
[ "task_categories:other", "task_categories:question-answering", "task_categories:text2text-generation", "task_ids:abstractive-qa", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|natural_questions", "source_datasets:extended|other-Common-Crawl", "source_datasets:original", "language:en", "license:unknown", "query-based-multi-document-summarization", "arxiv:2010.12694", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|natural_questions", "extended|other-Common-Crawl", "original"], "task_categories": ["other", "question-answering", "text2text-generation"], "task_ids": ["abstractive-qa", "extractive-qa"], "paperswithcode_id": "aquamuse", "pretty_name": "AQuaMuSe", "tags": ["query-based-multi-document-summarization"], "dataset_info": [{"config_name": "abstractive", "features": [{"name": "query", "dtype": "string"}, {"name": "input_urls", "sequence": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6434893, "num_examples": 6253}, {"name": "test", "num_bytes": 843165, "num_examples": 811}, {"name": "validation", "num_bytes": 689093, "num_examples": 661}], "download_size": 5167854, "dataset_size": 7967151}, {"config_name": "extractive", "features": [{"name": "query", "dtype": "string"}, {"name": "input_urls", "sequence": "string"}, {"name": "target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6434893, "num_examples": 6253}, {"name": "test", "num_bytes": 843165, "num_examples": 811}, {"name": "validation", "num_bytes": 689093, "num_examples": 661}], "download_size": 5162151, "dataset_size": 7967151}], "configs": [{"config_name": "abstractive", "data_files": [{"split": "train", "path": "abstractive/train-*"}, {"split": "test", "path": "abstractive/test-*"}, {"split": "validation", "path": "abstractive/validation-*"}]}, {"config_name": "extractive", "data_files": [{"split": "train", "path": "extractive/train-*"}, {"split": "test", "path": "extractive/test-*"}, {"split": "validation", "path": "extractive/validation-*"}]}]}
2024-01-09T12:36:37+00:00
[ "2010.12694" ]
[ "en" ]
[ "# Dataset Card for AQuaMuSe", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nAQuaMuSe is a novel scalable approach to automatically mine dual query based multi-document summarization datasets for extractive and abstractive summaries using question answering dataset (Google Natural Questions) and large document corpora (Common Crawl)\n\nThis dataset contains versions of automatically generated datasets for abstractive and extractive query-based multi-document summarization as described in AQuaMuSe paper.", "### Supported Tasks and Leaderboards\n\n- Abstractive and Extractive query-based multi-document summarization\n- Question Answering", "### Languages\n\nen : English", "## Dataset Structure", "### Data Instances\n\n- 'input_urls': a 'list' of 'string' features. \n- 'query': a 'string' feature.\n- 'target': a 'string' feature\n\n\nExample:", "### Data Fields\n\n - 'input_urls': a 'list' of 'string' features. \n - List of URLs to input documents pointing to Common Crawl to be summarized. \n - Dependencies: Documents URLs references the Common Crawl June 2017 Archive.\n \n - 'query': a 'string' feature.\n - Input query to be used as summarization context. This is derived from Natural Questions user queries.\n \n - 'target': a 'string' feature\n - Summarization target, derived from Natural Questions long answers.", "### Data Splits\n - This dataset has two high-level configurations 'abstractive' and 'extractive'\n - Each configuration has the data splits of 'train', 'dev' and 'test'\n - The original format of the data was in TFrecords, which has been parsed to the format as specified in Data Instances", "## Dataset Creation", "### Curation Rationale\n\nThe dataset is automatically generated datasets for abstractive and extractive query-based multi-document summarization as described in AQuaMuSe paper.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe dataset curator is sayalikulkarni, who is the contributor for the official GitHub repository for this dataset and also one of the authors of this dataset’s paper. 
As the account handles of other authors are not available currently who were also part of the curation of this dataset, the authors of the paper are mentioned here as follows, Sayali Kulkarni, Sheide Chammas, Wan Zhu, Fei Sha, and Eugene Ie.", "### Licensing Information\n\n\n\n\n\n@misc{kulkarni2020aquamuse,\n title={AQuaMuSe: Automatically Generating Datasets for Query-Based Multi-Document Summarization}, \n author={Sayali Kulkarni and Sheide Chammas and Wan Zhu and Fei Sha and Eugene Ie},\n year={2020},\n eprint={2010.12694},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "### Contributions\n\nThanks to @Karthik-Bhaskar for adding this dataset." ]
[ "TAGS\n#task_categories-other #task_categories-question-answering #task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-extractive-qa #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|natural_questions #source_datasets-extended|other-Common-Crawl #source_datasets-original #language-English #license-unknown #query-based-multi-document-summarization #arxiv-2010.12694 #region-us \n", "# Dataset Card for AQuaMuSe", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nAQuaMuSe is a novel scalable approach to automatically mine dual query based multi-document summarization datasets for extractive and abstractive summaries using question answering dataset (Google Natural Questions) and large document corpora (Common Crawl)\n\nThis dataset contains versions of automatically generated datasets for abstractive and extractive query-based multi-document summarization as described in AQuaMuSe paper.", "### Supported Tasks and Leaderboards\n\n- Abstractive and Extractive query-based multi-document summarization\n- Question Answering", "### Languages\n\nen : English", "## Dataset Structure", "### Data Instances\n\n- 'input_urls': a 'list' of 'string' features. \n- 'query': a 'string' feature.\n- 'target': a 'string' feature\n\n\nExample:", "### Data Fields\n\n - 'input_urls': a 'list' of 'string' features. \n - List of URLs to input documents pointing to Common Crawl to be summarized. \n - Dependencies: Documents URLs references the Common Crawl June 2017 Archive.\n \n - 'query': a 'string' feature.\n - Input query to be used as summarization context. 
This is derived from Natural Questions user queries.\n \n - 'target': a 'string' feature\n - Summarization target, derived from Natural Questions long answers.", "### Data Splits\n - This dataset has two high-level configurations 'abstractive' and 'extractive'\n - Each configuration has the data splits of 'train', 'dev' and 'test'\n - The original format of the data was in TFrecords, which has been parsed to the format as specified in Data Instances", "## Dataset Creation", "### Curation Rationale\n\nThe dataset is automatically generated datasets for abstractive and extractive query-based multi-document summarization as described in AQuaMuSe paper.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe dataset curator is sayalikulkarni, who is the contributor for the official GitHub repository for this dataset and also one of the authors of this dataset’s paper. As the account handles of other authors are not available currently who were also part of the curation of this dataset, the authors of the paper are mentioned here as follows, Sayali Kulkarni, Sheide Chammas, Wan Zhu, Fei Sha, and Eugene Ie.", "### Licensing Information\n\n\n\n\n\n@misc{kulkarni2020aquamuse,\n title={AQuaMuSe: Automatically Generating Datasets for Query-Based Multi-Document Summarization}, \n author={Sayali Kulkarni and Sheide Chammas and Wan Zhu and Fei Sha and Eugene Ie},\n year={2020},\n eprint={2010.12694},\n archivePrefix={arXiv},\n primaryClass={cs.CL}\n}", "### Contributions\n\nThanks to @Karthik-Bhaskar for adding this dataset." ]
[ 203, 10, 120, 27, 103, 31, 7, 6, 51, 128, 79, 5, 43, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 115, 111, 20 ]
[ "passage: TAGS\n#task_categories-other #task_categories-question-answering #task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-extractive-qa #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|natural_questions #source_datasets-extended|other-Common-Crawl #source_datasets-original #language-English #license-unknown #query-based-multi-document-summarization #arxiv-2010.12694 #region-us \n# Dataset Card for AQuaMuSe## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nAQuaMuSe is a novel scalable approach to automatically mine dual query based multi-document summarization datasets for extractive and abstractive summaries using question answering dataset (Google Natural Questions) and large document corpora (Common Crawl)\n\nThis dataset contains versions of automatically generated datasets for abstractive and extractive query-based multi-document summarization as described in AQuaMuSe paper.### Supported Tasks and Leaderboards\n\n- Abstractive and Extractive query-based multi-document summarization\n- Question Answering### Languages\n\nen : English## Dataset Structure", "passage: ### Data Instances\n\n- 'input_urls': a 'list' of 'string' features. \n- 'query': a 'string' feature.\n- 'target': a 'string' feature\n\n\nExample:### Data Fields\n\n - 'input_urls': a 'list' of 'string' features. \n - List of URLs to input documents pointing to Common Crawl to be summarized. \n - Dependencies: Documents URLs references the Common Crawl June 2017 Archive.\n \n - 'query': a 'string' feature.\n - Input query to be used as summarization context. This is derived from Natural Questions user queries.\n \n - 'target': a 'string' feature\n - Summarization target, derived from Natural Questions long answers.### Data Splits\n - This dataset has two high-level configurations 'abstractive' and 'extractive'\n - Each configuration has the data splits of 'train', 'dev' and 'test'\n - The original format of the data was in TFrecords, which has been parsed to the format as specified in Data Instances## Dataset Creation### Curation Rationale\n\nThe dataset is automatically generated datasets for abstractive and extractive query-based multi-document summarization as described in AQuaMuSe paper.### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators\n\nThe dataset curator is sayalikulkarni, who is the contributor for the official GitHub repository for this dataset and also one of the authors of this dataset’s paper. 
As the account handles of other authors are not available currently who were also part of the curation of this dataset, the authors of the paper are mentioned here as follows, Sayali Kulkarni, Sheide Chammas, Wan Zhu, Fei Sha, and Eugene Ie." ]
447b2a5a20c9e8ffaee0f14b31697be7b0dec403
# Dataset Card for ArCOV19

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** https://gitlab.com/bigirqu/ArCOV-19
- **Paper:** [ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks](https://arxiv.org/abs/2004.05861)
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [Fatima Haouari](mailto:200159617@qu.edu.qa)

### Dataset Summary

ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from the 27th of January till the 5th of May 2021. ArCOV-19 is the first publicly available Arabic Twitter dataset covering the COVID-19 pandemic that includes about 3.2M tweets alongside the propagation networks of the most-popular subset of them (i.e., most-retweeted and most-liked). The propagation networks include both retweets and conversational threads (i.e., threads of replies). ArCOV-19 is designed to enable research in several domains, including natural language processing, information retrieval, and social computing, among others. Preliminary analysis shows that ArCOV-19 captures rising discussions associated with the first reported cases of the disease as they appeared in the Arab world. In addition to the source tweets and the propagation networks, we also release the search queries and the language-independent crawler used to collect the tweets to encourage the curation of similar datasets.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Arabic

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

- `tweet_id`: the Twitter-assigned ID for the tweet object.

### Data Splits

[More Information Needed]

## Dataset Creation

The dataset collection approach is presented in the following paper: [ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks](https://arxiv.org/abs/2004.05861)

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

No annotation was provided with the dataset.

#### Annotation process

No annotation was provided with the dataset.

#### Who are the annotators?

No annotation was provided with the dataset.
### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

**Team:** [bigIR](https://sites.google.com/view/bigir) from Qatar University ([@bigIR_group](https://twitter.com/bigIR_group))

- [Fatima Haouari](mailto:200159617@qu.edu.qa)
- [Maram Hasanain](mailto:maram.hasanain@qu.edu.qa)
- [Reem Suwaileh](mailto:rs081123@qu.edu.qa)
- [Dr. Tamer Elsayed](mailto:telsayed@qu.edu.qa)

### Licensing Information

[More Information Needed]

### Citation Information

```
@article{haouari2020arcov19,
  title={ArCOV-19: The First Arabic COVID-19 Twitter Dataset with Propagation Networks},
  author={Fatima Haouari and Maram Hasanain and Reem Suwaileh and Tamer Elsayed},
  year={2021},
  eprint={2004.05861},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@Fatima-Haouari](https://github.com/Fatima-Haouari) for adding this dataset.
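Since the released data consists of tweet IDs only (see Data Fields above), a typical workflow is to load the ID list and then hydrate the tweets yourself with the released crawler. A minimal loading sketch follows; the Hub id `bigIR/ar_cov19` and the `tweetID` feature name are taken from this record's id and metadata fields, not from the card itself:

```python
from datasets import load_dataset

# Minimal loading sketch; the Hub id and the "tweetID" feature name
# come from this record's metadata rather than the original card.
tweets = load_dataset("bigIR/ar_cov19", split="train")

print(tweets)       # single "train" split of tweet IDs
print(tweets[0])    # e.g. {'tweetID': '...'}
```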
bigIR/ar_cov19
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:ar", "data-mining", "arxiv:2004.05861", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["ar"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "paperswithcode_id": "arcov-19", "pretty_name": "ArCOV19", "tags": ["data-mining"], "dataset_info": {"config_name": "ar_cov19", "features": [{"name": "tweetID", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 72223634, "num_examples": 3140158}], "download_size": 23678407, "dataset_size": 72223634}}
2023-09-19T05:52:17+00:00
[ "2004.05861" ]
[ "ar" ]
d51bf2435d030e0041344f576c5e8d7154828977
# Dataset Card for ArRestReviews

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Large Arabic Sentiment Analysis Resources](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces)
- **Repository:** [Large Arabic Sentiment Analysis Resources](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces)
- **Paper:** [Building Large Arabic Multi-domain Resources for Sentiment Analysis](https://github.com/hadyelsahar/large-arabic-sentiment-analysis-resouces/blob/master/Paper%20-%20Building%20Large%20Arabic%20Multi-domain%20Resources%20for%20Sentiment%20Analysis.pdf)
- **Point of Contact:** [hady elsahar](hadyelsahar@gmail.com)

### Dataset Summary

A dataset of 8,364 restaurant reviews in Arabic, collected from qaym.com, for sentiment analysis.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is in Arabic.

## Dataset Structure

### Data Instances

A typical data point comprises the following:

- "polarity": a class label of either 0 or 1 indicating the sentiment of the review
- "text": the plain text of a restaurant review in Arabic
- "restaurant_id": the restaurant ID on the website
- "user_id": the user ID on the website

example:

```
{
 'polarity': 0,  # negative
 'restaurant_id': '1412',
 'text': 'عادي جدا مامن زود',
 'user_id': '21294'
}
```

### Data Fields

- "polarity": a class label of either 0 (negative) or 1 (positive) indicating the sentiment of the review
- "text": the plain text of a restaurant review in Arabic
- "restaurant_id": the restaurant ID on the website (string)
- "user_id": the user ID on the website (string)

### Data Splits

The dataset is not split.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

Contains 8,364 restaurant reviews from qaym.com

#### Who are the source language producers?

Reviewers on the restaurant review website qaym.com.

### Annotations

The polarity field provides a label of 0 (negative) or 1 (positive) pertaining to the sentiment of the review

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@InProceedings{10.1007/978-3-319-18117-2_2,
author="ElSahar, Hady
and El-Beltagy, Samhaa R.",
editor="Gelbukh, Alexander",
title="Building Large Arabic Multi-domain Resources for Sentiment Analysis",
booktitle="Computational Linguistics and Intelligent Text Processing",
year="2015",
publisher="Springer International Publishing",
address="Cham",
pages="23--34",
isbn="978-3-319-18117-2"
}
```

### Contributions

Thanks to [@abdulelahsm](https://github.com/abdulelahsm) for adding this dataset.
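A minimal usage sketch follows; the Hub id `ar_res_reviews` and the label names are taken from this record's id and metadata fields, so treat them as assumptions rather than part of the original card:

```python
from datasets import load_dataset

# Load the single, unsplit training set (see "Data Splits" above).
reviews = load_dataset("ar_res_reviews", split="train")

# "polarity" is a class label: 0 = negative, 1 = positive.
print(reviews.features["polarity"].names)

# Quick sanity check: count positive reviews against the total.
positive = reviews.filter(lambda r: r["polarity"] == 1)
print(f"{len(positive)} positive reviews out of {len(reviews)}")
```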
ar_res_reviews
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ar"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "ArRestReviews", "dataset_info": {"features": [{"name": "polarity", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "text", "dtype": "string"}, {"name": "restaurant_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3617085, "num_examples": 8364}], "download_size": 1887029, "dataset_size": 3617085}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-09T12:38:13+00:00
[]
[ "ar" ]
557bf94ac6177cc442f42d0b09b6e4b76e8f47c9
# Dataset Card for ArSarcasm

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** [GitHub](https://github.com/iabufarha/ArSarcasm)
- **Paper:** https://www.aclweb.org/anthology/2020.osact-1.5/

### Dataset Summary

ArSarcasm is a new Arabic sarcasm detection dataset. The dataset was created using previously available Arabic sentiment analysis datasets ([SemEval 2017](https://www.aclweb.org/anthology/S17-2088.pdf) and [ASTD](https://www.aclweb.org/anthology/D15-1299.pdf)) and adds sarcasm and dialect labels to them. The dataset contains 10,547 tweets, 1,682 (16%) of which are sarcastic.

For more details, please check the paper [From Arabic Sentiment Analysis to Sarcasm Detection: The ArSarcasm Dataset](https://www.aclweb.org/anthology/2020.osact-1.5/)

### Supported Tasks and Leaderboards

You can get more information about Arabic sarcasm tasks and the leaderboard [here](https://sites.google.com/view/ar-sarcasm-sentiment-detection/).

### Languages

Arabic (multiple dialects)

## Dataset Structure

### Data Instances

```javascript
{'dialect': 1,
 'original_sentiment': 0,
 'sarcasm': 0,
 'sentiment': 0,
 'source': 'semeval',
 'tweet': 'نصيحه ما عمرك اتنزل لعبة سوبر ماريو مش زي ما كنّا متوقعين الله يرحم ايامات السيقا والفاميلي #SuperMarioRun'}
```

### Data Fields

- tweet: the original tweet text
- sarcasm: 0 for non-sarcastic, 1 for sarcastic
- sentiment: 0 for negative, 1 for neutral, 2 for positive
- original_sentiment: 0 for negative, 1 for neutral, 2 for positive
- source: the original source of the tweet: SemEval or ASTD
- dialect: 0 for Egypt, 1 for Gulf, 2 for Levant, 3 for Magreb, 4 for Modern Standard Arabic (MSA)

### Data Splits

The training set contains 8,437 tweets, while the test set contains 2,110 tweets.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

The dataset was created using previously available Arabic sentiment analysis datasets (SemEval 2017 and ASTD) and adds sarcasm and dialect labels to them.

#### Who are the source language producers?

SemEval 2017 and ASTD

### Annotations

#### Annotation process

For the annotation process, we used the Figure-Eight crowdsourcing platform. Our main objective was to annotate the data for sarcasm detection, but due to the challenges imposed by dialectal variations, we decided to add the annotation for dialects. We also include a new annotation for sentiment labels in order to have a glimpse of the variability and subjectivity between different annotators.
Thus, the annotators were asked to provide three labels for each tweet, as follows:

- Sarcasm: sarcastic or non-sarcastic.
- Sentiment: positive, negative or neutral.
- Dialect: Egyptian, Gulf, Levantine, Maghrebi or Modern Standard Arabic (MSA).

#### Who are the annotators?

Contributors on the Figure-Eight crowdsourcing platform.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

- Ibrahim Abu-Farha
- Walid Magdy

### Licensing Information

MIT

### Citation Information

```
@inproceedings{abu-farha-magdy-2020-arabic,
    title = "From {A}rabic Sentiment Analysis to Sarcasm Detection: The {A}r{S}arcasm Dataset",
    author = "Abu Farha, Ibrahim and Magdy, Walid",
    booktitle = "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resource Association",
    url = "https://www.aclweb.org/anthology/2020.osact-1.5",
    pages = "32--39",
    language = "English",
    ISBN = "979-10-95546-51-1",
}
```

### Contributions

Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset.
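A minimal sketch of how the integer labels described under Data Fields map back to label names, assuming the Hub id `ar_sarcasm` from this record's id field:

```python
from datasets import load_dataset

sarcasm = load_dataset("ar_sarcasm")  # "train" and "test" splits

train = sarcasm["train"]
# Class-label encodings mirror the "Data Fields" section above.
print(train.features["sarcasm"].names)   # ['non-sarcastic', 'sarcastic']
print(train.features["dialect"].names)   # ['egypt', 'gulf', 'levant', 'magreb', 'msa']

# Roughly 16% of tweets are sarcastic, per the summary.
sarcastic = train.filter(lambda t: t["sarcasm"] == 1)
print(f"{len(sarcastic)} sarcastic tweets in the training split")
```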
ar_sarcasm
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-semeval_2017", "source_datasets:extended|other-astd", "language:ar", "license:mit", "sarcasm-detection", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["ar"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-semeval_2017", "extended|other-astd"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "ArSarcasm", "tags": ["sarcasm-detection"], "dataset_info": {"features": [{"name": "dialect", "dtype": {"class_label": {"names": {"0": "egypt", "1": "gulf", "2": "levant", "3": "magreb", "4": "msa"}}}}, {"name": "sarcasm", "dtype": {"class_label": {"names": {"0": "non-sarcastic", "1": "sarcastic"}}}}, {"name": "sentiment", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}, {"name": "original_sentiment", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive"}}}}, {"name": "tweet", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1829159, "num_examples": 8437}, {"name": "test", "num_bytes": 458210, "num_examples": 2110}], "download_size": 1180619, "dataset_size": 2287369}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-09T12:42:05+00:00
[]
[ "ar" ]
c948146dc6e63d56b3469be209ea7e35a4ed5579
# Dataset Card for Arabic Billion Words Corpus

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://www.abuelkhair.net/index.php/en/arabic/abu-el-khair-corpus
- **Repository:**
- **Paper:** https://arxiv.org/pdf/1611.04033
- **Leaderboard:**
- **Point of Contact:** [Ibrahim Abu El-Khair](mailto:iabuelkhair@gmail.com)

### Dataset Summary

Abu El-Khair Corpus is an Arabic text corpus that includes more than five million newspaper articles. It contains over a billion and a half words in total, of which about three million are unique. The corpus is provided in two encodings, UTF-8 and Windows CP-1256, and is marked up in two markup languages, SGML and XML.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Arabic

## Dataset Structure

### Data Instances

This is an example of the "Almasryalyoum" configuration subset (a loading sketch appears at the end of this card):

```python
{
    "url": "http://today.almasryalyoum.com/printerfriendly.aspx?ArticleID=61300",
    "head_line": "رئيس وزراء المجر: عنصرية جماهير أوجبيست جلبت العار للبلاد",
    "date": "19/5/2007",
    "text": """قال متحدث باسم الحكومة المجرية: إن رئيس الوزراء فيرنك جيوركساني رحب بقرار اتحاد كرة القدم المجري بخصم ثلاث نقاط من نادي أوجبيست بسبب السلوك العنصري الذي صدر من جماهيره. وعاقب الاتحاد المجري فريق أوجبيست بعد أن سخرت جماهيره من إبراهيم سيديبي مهاجم فريق ديبرينسين الأسود أثناء مباراة الفريقين أوائل مايو الجاري. يذكر أن الاتحاد فرض أيضا غرامة مالية قدرها 20 ألف دولار علي أوجبيست في عام 2005 بعد أن رددت جماهيره شعارات معادية للسامية خلال مباراة بالدوري المجري. وأوضح جيوركساني في خطاب إلي إيستفان كيستليكي رئيس الاتحاد المجري لكرة القدم، أن هذا السلوك العنصري من الجماهير «جلب العار لكرة القدم وللمجر». يذكر أن المجر بها مجموعة من مشجعي كرة القدم المشاغبين «الهوليجانز»، وشارك الكثير منهم في أعمال شغب معادية للحكومة في العام الماضي.""",
}
```

### Data Fields

The data fields are:

- "url": string, original URL of the article.
- "head_line": string, headline of the article.
- "date": string, date of the article.
- "text": string, text content of the article.

### Data Splits

There is only one "training" split for all configuration subsets, containing the following number of examples:

|                | Number of examples |
|:---------------|-------------------:|
| Alittihad      |             349342 |
| Almasryalyoum  |             291723 |
| Almustaqbal    |             446873 |
| Alqabas        |             817274 |
| Echoroukonline |             139732 |
| Ryiadh         |             858188 |
| Sabanews       |              92149 |
| SaudiYoum      |             888068 |
| Techreen       |             314597 |
| Youm7          |            1172136 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@article{el20161,
  title={1.5 billion words arabic corpus},
  author={El-Khair, Ibrahim Abu},
  journal={arXiv preprint arXiv:1611.04033},
  year={2016}
}
```

### Contributions

Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) and [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
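A minimal loading sketch using the Hugging Face `datasets` library and the `Alittihad` configuration from the table above. This dataset ships with a loading script, so `trust_remote_code=True` may be required, and very recent `datasets` releases that no longer execute loading scripts may not load it at all; `streaming=True` avoids downloading the full archive up front:

```python
from datasets import load_dataset

# Each newspaper is its own configuration; see the Data Splits table above.
ds = load_dataset(
    "arabic_billion_words",
    "Alittihad",
    split="train",
    streaming=True,          # iterate without downloading everything first
    trust_remote_code=True,  # may be needed for script-based datasets
)

# Every record carries the four fields described under Data Fields.
first = next(iter(ds))
print(first["head_line"])
print(first["date"])
print(first["text"][:200])
```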
arabic_billion_words
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:ar", "license:unknown", "arxiv:1611.04033", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ar"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "10K<n<100K", "1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "Arabic Billion Words", "config_names": ["Alittihad", "Almasryalyoum", "Almustaqbal", "Alqabas", "Echoroukonline", "Ryiadh", "Sabanews", "SaudiYoum", "Techreen", "Youm7"], "dataset_info": [{"config_name": "Alittihad", "features": [{"name": "url", "dtype": "string"}, {"name": "head_line", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1601790302, "num_examples": 349342}], "download_size": 348259999, "dataset_size": 1601790302}, {"config_name": "Almasryalyoum", "features": [{"name": "url", "dtype": "string"}, {"name": "head_line", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1056197870, "num_examples": 291723}], "download_size": 242604438, "dataset_size": 1056197870}, {"config_name": "Almustaqbal", "features": [{"name": "url", "dtype": "string"}, {"name": "head_line", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1545659336, "num_examples": 446873}], "download_size": 350826797, "dataset_size": 1545659336}, {"config_name": "Alqabas", "features": [{"name": "url", "dtype": "string"}, {"name": "head_line", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2631729746, "num_examples": 817274}], "download_size": 595274646, "dataset_size": 2631729746}, {"config_name": "Echoroukonline", "features": [{"name": "url", "dtype": "string"}, {"name": "head_line", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 464386206, "num_examples": 139732}], "download_size": 108184378, "dataset_size": 464386206}, {"config_name": "Ryiadh", "features": [{"name": "url", "dtype": "string"}, {"name": "head_line", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3101294859, "num_examples": 858188}], "download_size": 691264971, "dataset_size": 3101294859}, {"config_name": "Sabanews", "features": [{"name": "url", "dtype": "string"}, {"name": "head_line", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 198019614, "num_examples": 92149}], "download_size": 38214558, "dataset_size": 198019614}, {"config_name": "SaudiYoum", "features": [{"name": "url", "dtype": "string"}, {"name": "head_line", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2723291416, "num_examples": 888068}], "download_size": 605537923, "dataset_size": 2723291416}, {"config_name": "Techreen", "features": [{"name": "url", "dtype": "string"}, {"name": "head_line", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1103458209, "num_examples": 314597}], "download_size": 252976781, "dataset_size": 1103458209}, 
{"config_name": "Youm7", "features": [{"name": "url", "dtype": "string"}, {"name": "head_line", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3004689464, "num_examples": 1172136}], "download_size": 617708074, "dataset_size": 3004689464}]}
2024-01-18T11:01:47+00:00
[ "1611.04033" ]
[ "ar" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-Arabic #license-unknown #arxiv-1611.04033 #region-us
Dataset Card for Arabic Billion Words Corpus ============================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: URL * Leaderboard: * Point of Contact:Ibrahim Abu El-Khair ### Dataset Summary Abu El-Khair Corpus is an Arabic text corpus, that includes more than five million newspaper articles. It contains over a billion and a half words in total, out of which, there are about three million unique words. The corpus is encoded with two types of encoding, namely: UTF-8, and Windows CP-1256. Also it was marked with two mark-up languages, namely: SGML, and XML. ### Supported Tasks and Leaderboards ### Languages Arabic Dataset Structure ----------------- ### Data Instances This is an example of the "Almasryalyoum" configuration subset: ### Data Fields The data fields are: * "url": string, original url of the article, * "head\_line": string, headline of the article, * "date": string, date of the article, * "text": string, text content of the article, ### Data Splits There is only one "training" split for all configuration subsets, containing the following number of examples: Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @zaidalyafeai and @albertvillanova for adding this dataset.
[ "### Dataset Summary\n\n\nAbu El-Khair Corpus is an Arabic text corpus, that includes more than five million newspaper articles.\nIt contains over a billion and a half words in total, out of which, there are about three million unique words.\nThe corpus is encoded with two types of encoding, namely: UTF-8, and Windows CP-1256.\nAlso it was marked with two mark-up languages, namely: SGML, and XML.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nArabic\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThis is an example of the \"Almasryalyoum\" configuration subset:", "### Data Fields\n\n\nThe data fields are:\n\n\n* \"url\": string, original url of the article,\n* \"head\\_line\": string, headline of the article,\n* \"date\": string, date of the article,\n* \"text\": string, text content of the article,", "### Data Splits\n\n\nThere is only one \"training\" split for all configuration subsets, containing the following number of examples:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @zaidalyafeai and @albertvillanova for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-Arabic #license-unknown #arxiv-1611.04033 #region-us \n", "### Dataset Summary\n\n\nAbu El-Khair Corpus is an Arabic text corpus, that includes more than five million newspaper articles.\nIt contains over a billion and a half words in total, out of which, there are about three million unique words.\nThe corpus is encoded with two types of encoding, namely: UTF-8, and Windows CP-1256.\nAlso it was marked with two mark-up languages, namely: SGML, and XML.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nArabic\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThis is an example of the \"Almasryalyoum\" configuration subset:", "### Data Fields\n\n\nThe data fields are:\n\n\n* \"url\": string, original url of the article,\n* \"head\\_line\": string, headline of the article,\n* \"date\": string, date of the article,\n* \"text\": string, text content of the article,", "### Data Splits\n\n\nThere is only one \"training\" split for all configuration subsets, containing the following number of examples:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @zaidalyafeai and @albertvillanova for adding this dataset." ]
[ 142, 100, 10, 12, 24, 66, 35, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 26 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-Arabic #license-unknown #arxiv-1611.04033 #region-us \n### Dataset Summary\n\n\nAbu El-Khair Corpus is an Arabic text corpus, that includes more than five million newspaper articles.\nIt contains over a billion and a half words in total, out of which, there are about three million unique words.\nThe corpus is encoded with two types of encoding, namely: UTF-8, and Windows CP-1256.\nAlso it was marked with two mark-up languages, namely: SGML, and XML.### Supported Tasks and Leaderboards### Languages\n\n\nArabic\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nThis is an example of the \"Almasryalyoum\" configuration subset:### Data Fields\n\n\nThe data fields are:\n\n\n* \"url\": string, original url of the article,\n* \"head\\_line\": string, headline of the article,\n* \"date\": string, date of the article,\n* \"text\": string, text content of the article,### Data Splits\n\n\nThere is only one \"training\" split for all configuration subsets, containing the following number of examples:\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information" ]
897e2cecae33a242f5003922d3f1564f0c55c3dd
# Dataset Card for Arabic POS Dialect

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://alt.qcri.org/resources/da_resources/
- **Repository:** https://github.com/qcri/dialectal_arabic_resources
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2018/pdf/562.pdf
- **Contacts:**
  - Ahmed Abdelali < aabdelali @ hbku dot edu dot qa >
  - Kareem Darwish < kdarwish @ hbku dot edu dot qa >
  - Hamdy Mubarak < hmubarak @ hbku dot edu dot qa >

### Dataset Summary

This dataset was created to support part of speech (POS) tagging in dialects of Arabic. It contains sets of 350 manually segmented and POS-tagged tweets for each of four dialects: Egyptian, Levantine, Gulf, and Maghrebi.

### Supported Tasks and Leaderboards

The dataset can be used to train a model for Arabic token segmentation and part of speech tagging in Arabic dialects. Success on this task is typically measured by achieving a high accuracy over a held-out dataset. Darwish et al. (2018) train a CRF model across all four dialects and achieve an average accuracy of 89.3%.

### Languages

The BCP-47 code is ar-Arab. The dataset consists of four dialects of Arabic, Egyptian (EGY), Levantine (LEV), Gulf (GLF), and Maghrebi (MGR), written in Arabic script.

## Dataset Structure

### Data Instances

Below is a partial example from the Egyptian set:

```
- `Fold`: 4
- `SubFold`: A
- `Word`: [ليه, لما, تحب, حد, من, قلبك, ...]
- `Segmentation`: [ليه, لما, تحب, حد, من, قلب+ك, ...]
- `POS`: [PART, PART, V, NOUN, PREP, NOUN+PRON, ...]
```

### Data Fields

The `fold` and the `subfold` fields refer to the crossfold validation splits used by Darwish et al., which can be generated using this [script](https://github.com/qcri/dialectal_arabic_resources/blob/master/generate_splits.sh).

- `fold`: An int32 indicating which fold the instance was in for the crossfold validation
- `subfold`: A string, either 'A' or 'B', indicating which subfold the instance was in for the crossfold validation
- `words`: A sequence of strings of the unsegmented tokens
- `segments`: A sequence of strings consisting of the segments of the word, separated by '+' if there is more than one segment
- `pos_tags`: A sequence of strings of the part of speech tags of the segments, separated by '+' if there is more than one segment (a short usage sketch appears at the end of this card)

The POS tags consist of a set developed by [Darwish et al. (2017)](https://www.aclweb.org/anthology/W17-1316.pdf) for Modern Standard Arabic (MSA) plus an additional 6 tags (2 dialect-specific tags and 4 tweet-specific tags).

| Tag | Purpose | Description |
| ----- | ------ | ----- |
| ADV | MSA | Adverb |
| ADJ | MSA | Adjective |
| CONJ | MSA | Conjunction |
| DET | MSA | Determiner |
| NOUN | MSA | Noun |
| NSUFF | MSA | Noun suffix |
| NUM | MSA | Number |
| PART | MSA | Particle |
| PREP | MSA | Preposition |
| PRON | MSA | Pronoun |
| PUNC | MSA | Punctuation |
| V | MSA | Verb |
| ABBREV | MSA | Abbreviation |
| CASE | MSA | Alef of tanween fatha |
| JUS | MSA | Jussification attached to verbs |
| VSUFF | MSA | Verb Suffix |
| FOREIGN | MSA | Non-Arabic as well as non-MSA words |
| FUR_PART | MSA | Future particle "s" prefix and "swf" |
| PROG_PART | Dialect | Progressive particle |
| NEG_PART | Dialect | Negation particle |
| HASH | Tweet | Hashtag |
| EMOT | Tweet | Emoticon/Emoji |
| MENTION | Tweet | Mention |
| URL | Tweet | URL |

### Data Splits

The dataset is split by dialect.

| Dialect | Tweets | Words |
| ----- | ------ | ----- |
| Egyptian (EGY) | 350 | 7481 |
| Levantine (LEV) | 350 | 7221 |
| Gulf (GLF) | 350 | 6767 |
| Maghrebi (MGR) | 350 | 6400 |

## Dataset Creation

### Curation Rationale

This dataset was created to address the lack of computational resources available for dialects of Arabic. These dialects are typically used in speech, while written forms of the language are typically in Modern Standard Arabic. Social media, however, has provided a venue for people to use dialects in written format.

### Source Data

This dataset builds on the work of [Eldesouki et al. (2017)](https://arxiv.org/pdf/1708.05891.pdf) and [Samih et al. (2017b)](https://www.aclweb.org/anthology/K17-1043.pdf), who originally collected the tweets.

#### Initial Data Collection and Normalization

They started with 175 million Arabic tweets returned by the Twitter API using the query "lang:ar" in March 2014. They then filtered this set using author-identified locations and tokens that are unique to each dialect. Finally, they had native speakers of each dialect select 350 tweets that were heavily accented.

#### Who are the source language producers?

The source language producers are people who posted on Twitter in Arabic using dialectal words from countries where the dialects of interest were spoken, as identified in [Mubarak and Darwish (2014)](https://www.aclweb.org/anthology/W14-3601.pdf).

### Annotations

#### Annotation process

The segmentation guidelines are available at https://alt.qcri.org/resources1/da_resources/seg-guidelines.pdf. The tagging guidelines are not provided, but Darwish et al. note that there were multiple rounds of quality control and revision.

#### Who are the annotators?

The POS tags were annotated by native speakers of each dialect. Further information is not known.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

Darwish et al. find that the accuracy on the Maghrebi dataset suffered the most when the training set was from another dialect, and conversely that training on Maghrebi yielded the worst results for all the other dialects. They suggest that Egyptian, Levantine, and Gulf may be more similar to each other and Maghrebi the most dissimilar to all of them. They also find that training on Modern Standard Arabic (MSA) and testing on dialects yielded significantly lower results compared to training on dialects and testing on MSA. This suggests that dialectal variation should be a significant consideration for future work in Arabic NLP applications, particularly when working with social media text.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

This dataset was curated by Kareem Darwish, Hamdy Mubarak, Mohamed Eldesouki and Ahmed Abdelali with the Qatar Computing Research Institute (QCRI), Younes Samih and Laura Kallmeyer with the University of Dusseldorf, Randah Alharbi and Walid Magdy with the University of Edinburgh, and Mohammed Attia with Google. No funding information was included.

### Licensing Information

This dataset is licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).

### Citation Information

Kareem Darwish, Hamdy Mubarak, Ahmed Abdelali, Mohamed Eldesouki, Younes Samih, Randah Alharbi, Mohammed Attia, Walid Magdy and Laura Kallmeyer (2018) Multi-Dialect Arabic POS Tagging: A CRF Approach. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), May 7-12, 2018. Miyazaki, Japan.

```
@InProceedings{DARWISH18.562,
  author = {Kareem Darwish, Hamdy Mubarak, Ahmed Abdelali, Mohamed Eldesouki, Younes Samih, Randah Alharbi, Mohammed Attia, Walid Magdy and Laura Kallmeyer},
  title = {Multi-Dialect Arabic POS Tagging: A CRF Approach},
  booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  year = {2018},
  month = {may},
  date = {7-12},
  location = {Miyazaki, Japan},
  editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
  publisher = {European Language Resources Association (ELRA)},
  address = {Paris, France},
  isbn = {979-10-95546-00-9},
  language = {english}
}
```

### Contributions

Thanks to [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
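A minimal usage sketch with the Hugging Face `datasets` library, showing how the `'+'`-separated `segments` and `pos_tags` entries described above line up per word; the configuration name selects the dialect:

```python
from datasets import load_dataset

# Load the Egyptian configuration; "glf", "lev", and "mgr" work the same way.
ds = load_dataset("arabic_pos_dialect", "egy", split="train")

tweet = ds[0]
# For multi-segment words, both fields hold '+'-separated entries,
# so splitting on '+' yields parallel lists of equal length.
for word, seg, pos in zip(tweet["words"], tweet["segments"], tweet["pos_tags"]):
    print(word, list(zip(seg.split("+"), pos.split("+"))))
```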
arabic_pos_dialect
[ "task_categories:token-classification", "task_ids:part-of-speech", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:multilingual", "size_categories:n<1K", "source_datasets:extended", "language:ar", "license:apache-2.0", "arxiv:1708.05891", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ar"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["n<1K"], "source_datasets": ["extended"], "task_categories": ["token-classification"], "task_ids": ["part-of-speech"], "pretty_name": "Arabic POS Dialect", "dataset_info": [{"config_name": "egy", "features": [{"name": "fold", "dtype": "int32"}, {"name": "subfold", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "segments", "sequence": "string"}, {"name": "pos_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 269629, "num_examples": 350}], "download_size": 89684, "dataset_size": 269629}, {"config_name": "glf", "features": [{"name": "fold", "dtype": "int32"}, {"name": "subfold", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "segments", "sequence": "string"}, {"name": "pos_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 239883, "num_examples": 350}], "download_size": 89178, "dataset_size": 239883}, {"config_name": "lev", "features": [{"name": "fold", "dtype": "int32"}, {"name": "subfold", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "segments", "sequence": "string"}, {"name": "pos_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 263102, "num_examples": 350}], "download_size": 97055, "dataset_size": 263102}, {"config_name": "mgr", "features": [{"name": "fold", "dtype": "int32"}, {"name": "subfold", "dtype": "string"}, {"name": "words", "sequence": "string"}, {"name": "segments", "sequence": "string"}, {"name": "pos_tags", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 245717, "num_examples": 350}], "download_size": 90503, "dataset_size": 245717}], "configs": [{"config_name": "egy", "data_files": [{"split": "train", "path": "egy/train-*"}]}, {"config_name": "glf", "data_files": [{"split": "train", "path": "glf/train-*"}]}, {"config_name": "lev", "data_files": [{"split": "train", "path": "lev/train-*"}]}, {"config_name": "mgr", "data_files": [{"split": "train", "path": "mgr/train-*"}]}]}
2024-01-09T12:43:34+00:00
[ "1708.05891" ]
[ "ar" ]
TAGS #task_categories-token-classification #task_ids-part-of-speech #annotations_creators-expert-generated #language_creators-found #multilinguality-multilingual #size_categories-n<1K #source_datasets-extended #language-Arabic #license-apache-2.0 #arxiv-1708.05891 #region-us
Dataset Card for Arabic POS Dialect =================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Contacts: * Ahmed Abdelali < aabdelali @ hbku dot edu dot qa > * Kareem Darwish < kdarwish @ hbku dot edu dot qa > * Hamdy Mubarak < hmubarak @ hbku dot edu dot qa > ### Dataset Summary This dataset was created to support part of speech (POS) tagging in dialects of Arabic. It contains sets of 350 manually segmented and POS tagged tweets for each of four dialects: Egyptian, Levantine, Gulf, and Maghrebi. ### Supported Tasks and Leaderboards The dataset can be used to train a model for Arabic token segmentation and part of speech tagging in Arabic dialects. Success on this task is typically measured by achieving a high accuracy over a held out dataset. Darwish et al. (2018) train a CRF model across all four dialects and achieve an average accuracy of 89.3%. ### Languages The BCP-47 code is ar-Arab. The dataset consists of four dialects of Arabic, Egyptian (EGY), Levantine (LEV), Gulf (GLF), and Maghrebi (MGR), written in Arabic script. Dataset Structure ----------------- ### Data Instances Below is a partial example from the Egyptian set: ### Data Fields The 'fold' and the 'subfold' fields refer to the crossfold validation splits used by Darwish et al., which can be generated using this script. * 'fold': An int32 indicating which fold the instance was in for the crossfold validation * 'subfold': A string, either 'A' or 'B', indicating which subfold the instance was in for the crossfold validation * 'words': A sequence of strings of the unsegmented token * 'segments': A sequence of strings consisting of the segments of the word separated by '+' if there is more than one segment * 'pos\_tags': A sequence of strings of the part of speech tags of the segments separated by '+' if there is more than one segment The POS tags consist of a set developed by Darwish et al. (2017) for Modern Standard Arabic (MSA) plus an additional 6 tags (2 dialect-specific tags and 4 tweet-specific tags). 
Tag: ADV, Purpose: MSA, Description: Adverb Tag: ADJ, Purpose: MSA, Description: Adjective Tag: CONJ, Purpose: MSA, Description: Conjunction Tag: DET, Purpose: MSA, Description: Determiner Tag: NOUN, Purpose: MSA, Description: Noun Tag: NSUFF, Purpose: MSA, Description: Noun suffix Tag: NUM, Purpose: MSA, Description: Number Tag: PART, Purpose: MSA, Description: Particle Tag: PREP, Purpose: MSA, Description: Preposition Tag: PRON, Purpose: MSA, Description: Pronoun Tag: PUNC, Purpose: MSA, Description: Preposition Tag: V, Purpose: MSA, Description: Verb Tag: ABBREV, Purpose: MSA, Description: Abbreviation Tag: CASE, Purpose: MSA, Description: Alef of tanween fatha Tag: JUS, Purpose: MSA, Description: Jussification attached to verbs Tag: VSUFF, Purpose: MSA, Description: Verb Suffix Tag: FOREIGN, Purpose: MSA, Description: Non-Arabic as well as non-MSA words Tag: FUR\_PART, Purpose: MSA, Description: Future particle "s" prefix and "swf" Tag: PROG\_PART, Purpose: Dialect, Description: Progressive particle Tag: NEG\_PART, Purpose: Dialect, Description: Negation particle Tag: HASH, Purpose: Tweet, Description: Hashtag Tag: EMOT, Purpose: Tweet, Description: Emoticon/Emoji Tag: MENTION, Purpose: Tweet, Description: Mention Tag: URL, Purpose: Tweet, Description: URL ### Data Splits The dataset is split by dialect. Dialect: Egyptian (EGY), Tweets: 350, Words: 7481 Dialect: Levantine (LEV), Tweets: 350, Words: 7221 Dialect: Gulf (GLF), Tweets: 350, Words: 6767 Dialect: Maghrebi (MGR), Tweets: 350, Words: 6400 Dataset Creation ---------------- ### Curation Rationale This dataset was created to address the lack of computational resources available for dialects of Arabic. These dialects are typically used in speech, while written forms of the language are typically in Modern Standard Arabic. Social media, however, has provided a venue for people to use dialects in written format. ### Source Data This dataset builds off of the work of Eldesouki et al. (2017) and Samih et al. (2017b) who originally collected the tweets. #### Initial Data Collection and Normalization They started with 175 million Arabic tweets returned by the Twitter API using the query "lang:ar" in March 2014. They then filtered this set using author-identified locations and tokens that are unique to each dialect. Finally, they had native speakers of each dialect select 350 tweets that were heavily accented. #### Who are the source language producers? The source language producers are people who posted on Twitter in Arabic using dialectal words from countries where the dialects of interest were spoken, as identified in Mubarak and Darwish (2014). ### Annotations #### Annotation process The segmentation guidelines are available at URL The tagging guidelines are not provided, but Darwish at al. note that there were multiple rounds of quality control and revision. #### Who are the annotators? The POS tags were annotated by native speakers of each dialect. Further information is not known. ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset Darwish et al find that the accuracy on the Maghrebi dataset suffered the most when the training set was from another dialect, and conversely training on Maghrebi yielded the worst results for all the other dialects. They suggest that Egyptian, Levantine, and Gulf may be more similar to each other and Maghrebi the most dissimilar to all of them. 
They also find that training on Modern Standard Arabic (MSA) and testing on dialects yielded significantly lower results compared to training on dialects and testing on MSA. This suggests that dialectal variation should be a significant consideration for future work in Arabic NLP applications, particularly when working with social media text. ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators This dataset was curated by Kareem Darwish, Hamdy Mubarak, Mohamed Eldesouki and Ahmed Abdelali with the Qatar Computing Research Institute (QCRI), Younes Samih and Laura Kallmeyer with the University of Dusseldorf, Randah Alharbi and Walid Magdy with the University of Edinburgh, and Mohammed Attia with Google. No funding information was included. ### Licensing Information This dataset is licensed under the Apache License, Version 2.0. Kareem Darwish, Hamdy Mubarak, Ahmed Abdelali, Mohamed Eldesouki, Younes Samih, Randah Alharbi, Mohammed Attia, Walid Magdy and Laura Kallmeyer (2018) Multi-Dialect Arabic POS Tagging: A CRF Approach. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), May 7-12, 2018. Miyazaki, Japan. ### Contributions Thanks to @mcmillanmajora for adding this dataset.
[ "### Dataset Summary\n\n\nThis dataset was created to support part of speech (POS) tagging in dialects of Arabic. It contains sets of 350 manually segmented and POS tagged tweets for each of four dialects: Egyptian, Levantine, Gulf, and Maghrebi.", "### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Arabic token segmentation and part of speech tagging in Arabic dialects. Success on this task is typically measured by achieving a high accuracy over a held out dataset. Darwish et al. (2018) train a CRF model across all four dialects and achieve an average accuracy of 89.3%.", "### Languages\n\n\nThe BCP-47 code is ar-Arab. The dataset consists of four dialects of Arabic, Egyptian (EGY), Levantine (LEV), Gulf (GLF), and Maghrebi (MGR), written in Arabic script.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nBelow is a partial example from the Egyptian set:", "### Data Fields\n\n\nThe 'fold' and the 'subfold' fields refer to the crossfold validation splits used by Darwish et al., which can be generated using this script.\n\n\n* 'fold': An int32 indicating which fold the instance was in for the crossfold validation\n* 'subfold': A string, either 'A' or 'B', indicating which subfold the instance was in for the crossfold validation\n* 'words': A sequence of strings of the unsegmented token\n* 'segments': A sequence of strings consisting of the segments of the word separated by '+' if there is more than one segment\n* 'pos\\_tags': A sequence of strings of the part of speech tags of the segments separated by '+' if there is more than one segment\n\n\nThe POS tags consist of a set developed by Darwish et al. (2017) for Modern Standard Arabic (MSA) plus an additional 6 tags (2 dialect-specific tags and 4 tweet-specific tags).\n\n\nTag: ADV, Purpose: MSA, Description: Adverb\nTag: ADJ, Purpose: MSA, Description: Adjective\nTag: CONJ, Purpose: MSA, Description: Conjunction\nTag: DET, Purpose: MSA, Description: Determiner\nTag: NOUN, Purpose: MSA, Description: Noun\nTag: NSUFF, Purpose: MSA, Description: Noun suffix\nTag: NUM, Purpose: MSA, Description: Number\nTag: PART, Purpose: MSA, Description: Particle\nTag: PREP, Purpose: MSA, Description: Preposition\nTag: PRON, Purpose: MSA, Description: Pronoun\nTag: PUNC, Purpose: MSA, Description: Preposition\nTag: V, Purpose: MSA, Description: Verb\nTag: ABBREV, Purpose: MSA, Description: Abbreviation\nTag: CASE, Purpose: MSA, Description: Alef of tanween fatha\nTag: JUS, Purpose: MSA, Description: Jussification attached to verbs\nTag: VSUFF, Purpose: MSA, Description: Verb Suffix\nTag: FOREIGN, Purpose: MSA, Description: Non-Arabic as well as non-MSA words\nTag: FUR\\_PART, Purpose: MSA, Description: Future particle \"s\" prefix and \"swf\"\nTag: PROG\\_PART, Purpose: Dialect, Description: Progressive particle\nTag: NEG\\_PART, Purpose: Dialect, Description: Negation particle\nTag: HASH, Purpose: Tweet, Description: Hashtag\nTag: EMOT, Purpose: Tweet, Description: Emoticon/Emoji\nTag: MENTION, Purpose: Tweet, Description: Mention\nTag: URL, Purpose: Tweet, Description: URL", "### Data Splits\n\n\nThe dataset is split by dialect.\n\n\nDialect: Egyptian (EGY), Tweets: 350, Words: 7481\nDialect: Levantine (LEV), Tweets: 350, Words: 7221\nDialect: Gulf (GLF), Tweets: 350, Words: 6767\nDialect: Maghrebi (MGR), Tweets: 350, Words: 6400\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThis dataset was created to address the lack of computational resources available for dialects of 
Arabic. These dialects are typically used in speech, while written forms of the language are typically in Modern Standard Arabic. Social media, however, has provided a venue for people to use dialects in written format.", "### Source Data\n\n\nThis dataset builds off of the work of Eldesouki et al. (2017) and Samih et al. (2017b) who originally collected the tweets.", "#### Initial Data Collection and Normalization\n\n\nThey started with 175 million Arabic tweets returned by the Twitter API using the query \"lang:ar\" in March 2014. They then filtered this set using author-identified locations and tokens that are unique to each dialect. Finally, they had native speakers of each dialect select 350 tweets that were heavily accented.", "#### Who are the source language producers?\n\n\nThe source language producers are people who posted on Twitter in Arabic using dialectal words from countries where the dialects of interest were spoken, as identified in Mubarak and Darwish (2014).", "### Annotations", "#### Annotation process\n\n\nThe segmentation guidelines are available at URL The tagging guidelines are not provided, but Darwish at al. note that there were multiple rounds of quality control and revision.", "#### Who are the annotators?\n\n\nThe POS tags were annotated by native speakers of each dialect. Further information is not known.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nDarwish et al find that the accuracy on the Maghrebi dataset suffered the most when the training set was from another dialect, and conversely training on Maghrebi yielded the worst results for all the other dialects. They suggest that Egyptian, Levantine, and Gulf may be more similar to each other and Maghrebi the most dissimilar to all of them. They also find that training on Modern Standard Arabic (MSA) and testing on dialects yielded significantly lower results compared to training on dialects and testing on MSA. This suggests that dialectal variation should be a significant consideration for future work in Arabic NLP applications, particularly when working with social media text.", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was curated by Kareem Darwish, Hamdy Mubarak, Mohamed Eldesouki and Ahmed Abdelali with the Qatar Computing Research Institute (QCRI), Younes Samih and Laura Kallmeyer with the University of Dusseldorf, Randah Alharbi and Walid Magdy with the University of Edinburgh, and Mohammed Attia with Google. No funding information was included.", "### Licensing Information\n\n\nThis dataset is licensed under the Apache License, Version 2.0.\n\n\nKareem Darwish, Hamdy Mubarak, Ahmed Abdelali, Mohamed Eldesouki, Younes Samih, Randah Alharbi, Mohammed Attia, Walid Magdy and Laura Kallmeyer (2018) Multi-Dialect Arabic POS Tagging: A CRF Approach. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), May 7-12, 2018. Miyazaki, Japan.", "### Contributions\n\n\nThanks to @mcmillanmajora for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-part-of-speech #annotations_creators-expert-generated #language_creators-found #multilinguality-multilingual #size_categories-n<1K #source_datasets-extended #language-Arabic #license-apache-2.0 #arxiv-1708.05891 #region-us \n", "### Dataset Summary\n\n\nThis dataset was created to support part of speech (POS) tagging in dialects of Arabic. It contains sets of 350 manually segmented and POS tagged tweets for each of four dialects: Egyptian, Levantine, Gulf, and Maghrebi.", "### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Arabic token segmentation and part of speech tagging in Arabic dialects. Success on this task is typically measured by achieving a high accuracy over a held out dataset. Darwish et al. (2018) train a CRF model across all four dialects and achieve an average accuracy of 89.3%.", "### Languages\n\n\nThe BCP-47 code is ar-Arab. The dataset consists of four dialects of Arabic, Egyptian (EGY), Levantine (LEV), Gulf (GLF), and Maghrebi (MGR), written in Arabic script.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nBelow is a partial example from the Egyptian set:", "### Data Fields\n\n\nThe 'fold' and the 'subfold' fields refer to the crossfold validation splits used by Darwish et al., which can be generated using this script.\n\n\n* 'fold': An int32 indicating which fold the instance was in for the crossfold validation\n* 'subfold': A string, either 'A' or 'B', indicating which subfold the instance was in for the crossfold validation\n* 'words': A sequence of strings of the unsegmented token\n* 'segments': A sequence of strings consisting of the segments of the word separated by '+' if there is more than one segment\n* 'pos\\_tags': A sequence of strings of the part of speech tags of the segments separated by '+' if there is more than one segment\n\n\nThe POS tags consist of a set developed by Darwish et al. 
(2017) for Modern Standard Arabic (MSA) plus an additional 6 tags (2 dialect-specific tags and 4 tweet-specific tags).\n\n\nTag: ADV, Purpose: MSA, Description: Adverb\nTag: ADJ, Purpose: MSA, Description: Adjective\nTag: CONJ, Purpose: MSA, Description: Conjunction\nTag: DET, Purpose: MSA, Description: Determiner\nTag: NOUN, Purpose: MSA, Description: Noun\nTag: NSUFF, Purpose: MSA, Description: Noun suffix\nTag: NUM, Purpose: MSA, Description: Number\nTag: PART, Purpose: MSA, Description: Particle\nTag: PREP, Purpose: MSA, Description: Preposition\nTag: PRON, Purpose: MSA, Description: Pronoun\nTag: PUNC, Purpose: MSA, Description: Preposition\nTag: V, Purpose: MSA, Description: Verb\nTag: ABBREV, Purpose: MSA, Description: Abbreviation\nTag: CASE, Purpose: MSA, Description: Alef of tanween fatha\nTag: JUS, Purpose: MSA, Description: Jussification attached to verbs\nTag: VSUFF, Purpose: MSA, Description: Verb Suffix\nTag: FOREIGN, Purpose: MSA, Description: Non-Arabic as well as non-MSA words\nTag: FUR\\_PART, Purpose: MSA, Description: Future particle \"s\" prefix and \"swf\"\nTag: PROG\\_PART, Purpose: Dialect, Description: Progressive particle\nTag: NEG\\_PART, Purpose: Dialect, Description: Negation particle\nTag: HASH, Purpose: Tweet, Description: Hashtag\nTag: EMOT, Purpose: Tweet, Description: Emoticon/Emoji\nTag: MENTION, Purpose: Tweet, Description: Mention\nTag: URL, Purpose: Tweet, Description: URL", "### Data Splits\n\n\nThe dataset is split by dialect.\n\n\nDialect: Egyptian (EGY), Tweets: 350, Words: 7481\nDialect: Levantine (LEV), Tweets: 350, Words: 7221\nDialect: Gulf (GLF), Tweets: 350, Words: 6767\nDialect: Maghrebi (MGR), Tweets: 350, Words: 6400\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThis dataset was created to address the lack of computational resources available for dialects of Arabic. These dialects are typically used in speech, while written forms of the language are typically in Modern Standard Arabic. Social media, however, has provided a venue for people to use dialects in written format.", "### Source Data\n\n\nThis dataset builds off of the work of Eldesouki et al. (2017) and Samih et al. (2017b) who originally collected the tweets.", "#### Initial Data Collection and Normalization\n\n\nThey started with 175 million Arabic tweets returned by the Twitter API using the query \"lang:ar\" in March 2014. They then filtered this set using author-identified locations and tokens that are unique to each dialect. Finally, they had native speakers of each dialect select 350 tweets that were heavily accented.", "#### Who are the source language producers?\n\n\nThe source language producers are people who posted on Twitter in Arabic using dialectal words from countries where the dialects of interest were spoken, as identified in Mubarak and Darwish (2014).", "### Annotations", "#### Annotation process\n\n\nThe segmentation guidelines are available at URL The tagging guidelines are not provided, but Darwish at al. note that there were multiple rounds of quality control and revision.", "#### Who are the annotators?\n\n\nThe POS tags were annotated by native speakers of each dialect. 
Further information is not known.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nDarwish et al find that the accuracy on the Maghrebi dataset suffered the most when the training set was from another dialect, and conversely training on Maghrebi yielded the worst results for all the other dialects. They suggest that Egyptian, Levantine, and Gulf may be more similar to each other and Maghrebi the most dissimilar to all of them. They also find that training on Modern Standard Arabic (MSA) and testing on dialects yielded significantly lower results compared to training on dialects and testing on MSA. This suggests that dialectal variation should be a significant consideration for future work in Arabic NLP applications, particularly when working with social media text.", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was curated by Kareem Darwish, Hamdy Mubarak, Mohamed Eldesouki and Ahmed Abdelali with the Qatar Computing Research Institute (QCRI), Younes Samih and Laura Kallmeyer with the University of Dusseldorf, Randah Alharbi and Walid Magdy with the University of Edinburgh, and Mohammed Attia with Google. No funding information was included.", "### Licensing Information\n\n\nThis dataset is licensed under the Apache License, Version 2.0.\n\n\nKareem Darwish, Hamdy Mubarak, Ahmed Abdelali, Mohamed Eldesouki, Younes Samih, Randah Alharbi, Mohammed Attia, Walid Magdy and Laura Kallmeyer (2018) Multi-Dialect Arabic POS Tagging: A CRF Approach. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), May 7-12, 2018. Miyazaki, Japan.", "### Contributions\n\n\nThanks to @mcmillanmajora for adding this dataset." ]
[ 101, 65, 89, 65, 19, 650, 100, 69, 41, 81, 48, 5, 40, 32, 18, 158, 8, 14, 91, 117, 20 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-part-of-speech #annotations_creators-expert-generated #language_creators-found #multilinguality-multilingual #size_categories-n<1K #source_datasets-extended #language-Arabic #license-apache-2.0 #arxiv-1708.05891 #region-us \n### Dataset Summary\n\n\nThis dataset was created to support part of speech (POS) tagging in dialects of Arabic. It contains sets of 350 manually segmented and POS tagged tweets for each of four dialects: Egyptian, Levantine, Gulf, and Maghrebi.### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Arabic token segmentation and part of speech tagging in Arabic dialects. Success on this task is typically measured by achieving a high accuracy over a held out dataset. Darwish et al. (2018) train a CRF model across all four dialects and achieve an average accuracy of 89.3%.### Languages\n\n\nThe BCP-47 code is ar-Arab. The dataset consists of four dialects of Arabic, Egyptian (EGY), Levantine (LEV), Gulf (GLF), and Maghrebi (MGR), written in Arabic script.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nBelow is a partial example from the Egyptian set:", "passage: ### Data Fields\n\n\nThe 'fold' and the 'subfold' fields refer to the crossfold validation splits used by Darwish et al., which can be generated using this script.\n\n\n* 'fold': An int32 indicating which fold the instance was in for the crossfold validation\n* 'subfold': A string, either 'A' or 'B', indicating which subfold the instance was in for the crossfold validation\n* 'words': A sequence of strings of the unsegmented token\n* 'segments': A sequence of strings consisting of the segments of the word separated by '+' if there is more than one segment\n* 'pos\\_tags': A sequence of strings of the part of speech tags of the segments separated by '+' if there is more than one segment\n\n\nThe POS tags consist of a set developed by Darwish et al. 
(2017) for Modern Standard Arabic (MSA) plus an additional 6 tags (2 dialect-specific tags and 4 tweet-specific tags).\n\n\nTag: ADV, Purpose: MSA, Description: Adverb\nTag: ADJ, Purpose: MSA, Description: Adjective\nTag: CONJ, Purpose: MSA, Description: Conjunction\nTag: DET, Purpose: MSA, Description: Determiner\nTag: NOUN, Purpose: MSA, Description: Noun\nTag: NSUFF, Purpose: MSA, Description: Noun suffix\nTag: NUM, Purpose: MSA, Description: Number\nTag: PART, Purpose: MSA, Description: Particle\nTag: PREP, Purpose: MSA, Description: Preposition\nTag: PRON, Purpose: MSA, Description: Pronoun\nTag: PUNC, Purpose: MSA, Description: Preposition\nTag: V, Purpose: MSA, Description: Verb\nTag: ABBREV, Purpose: MSA, Description: Abbreviation\nTag: CASE, Purpose: MSA, Description: Alef of tanween fatha\nTag: JUS, Purpose: MSA, Description: Jussification attached to verbs\nTag: VSUFF, Purpose: MSA, Description: Verb Suffix\nTag: FOREIGN, Purpose: MSA, Description: Non-Arabic as well as non-MSA words\nTag: FUR\\_PART, Purpose: MSA, Description: Future particle \"s\" prefix and \"swf\"\nTag: PROG\\_PART, Purpose: Dialect, Description: Progressive particle\nTag: NEG\\_PART, Purpose: Dialect, Description: Negation particle\nTag: HASH, Purpose: Tweet, Description: Hashtag\nTag: EMOT, Purpose: Tweet, Description: Emoticon/Emoji\nTag: MENTION, Purpose: Tweet, Description: Mention\nTag: URL, Purpose: Tweet, Description: URL### Data Splits\n\n\nThe dataset is split by dialect.\n\n\nDialect: Egyptian (EGY), Tweets: 350, Words: 7481\nDialect: Levantine (LEV), Tweets: 350, Words: 7221\nDialect: Gulf (GLF), Tweets: 350, Words: 6767\nDialect: Maghrebi (MGR), Tweets: 350, Words: 6400\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThis dataset was created to address the lack of computational resources available for dialects of Arabic. These dialects are typically used in speech, while written forms of the language are typically in Modern Standard Arabic. Social media, however, has provided a venue for people to use dialects in written format.### Source Data\n\n\nThis dataset builds off of the work of Eldesouki et al. (2017) and Samih et al. (2017b) who originally collected the tweets.#### Initial Data Collection and Normalization\n\n\nThey started with 175 million Arabic tweets returned by the Twitter API using the query \"lang:ar\" in March 2014. They then filtered this set using author-identified locations and tokens that are unique to each dialect. Finally, they had native speakers of each dialect select 350 tweets that were heavily accented.#### Who are the source language producers?\n\n\nThe source language producers are people who posted on Twitter in Arabic using dialectal words from countries where the dialects of interest were spoken, as identified in Mubarak and Darwish (2014).### Annotations#### Annotation process\n\n\nThe segmentation guidelines are available at URL The tagging guidelines are not provided, but Darwish at al. note that there were multiple rounds of quality control and revision.#### Who are the annotators?\n\n\nThe POS tags were annotated by native speakers of each dialect. Further information is not known.### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------" ]
5dd29493c034ff0859973fdc7cf69f00ffa50d26
# Dataset Card for Arabic Speech Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Arabic Speech Corpus](http://en.arabicspeechcorpus.com/) - **Repository:** [Needs More Information] - **Paper:** [Modern standard Arabic phonetics for speech synthesis](http://en.arabicspeechcorpus.com/Nawar%20Halabi%20PhD%20Thesis%20Revised.pdf) - **Leaderboard:** [Paperswithcode Leaderboard][Needs More Information] - **Point of Contact:** [Nawar Halabi](mailto:nawar.halabi@gmail.com) ### Dataset Summary This Speech corpus has been developed as part of PhD work carried out by Nawar Halabi at the University of Southampton. The corpus was recorded in south Levantine Arabic (Damascian accent) using a professional studio. Synthesized speech as an output using this corpus has produced a high quality, natural voice. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages The audio is in Arabic. ## Dataset Structure ### Data Instances A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. 
An example from the dataset is: ``` { 'file': '/Users/username/.cache/huggingface/datasets/downloads/extracted/baebe85e2cb67579f6f88e7117a87888c1ace390f4f14cb6c3e585c517ad9db0/arabic-speech-corpus/wav/ARA NORM 0002.wav', 'audio': {'path': '/Users/username/.cache/huggingface/datasets/downloads/extracted/baebe85e2cb67579f6f88e7117a87888c1ace390f4f14cb6c3e585c517ad9db0/arabic-speech-corpus/wav/ARA NORM 0002.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 48000}, 'orthographic': 'waraj~aHa Alt~aqoriyru Al~a*iy >aEad~ahu maEohadu >aboHaA^i haDabapi Alt~ibiti fiy Alo>akaAdiymiy~api AlS~iyniy~api liloEuluwmi - >ano tasotamir~a darajaAtu AloHaraArapi wamusotawayaAtu Alr~uTuwbapi fiy Alo<irotifaAEi TawaAla ha*aA Aloqarono', 'phonetic': "sil w a r a' jj A H a tt A q r ii0' r u0 ll a * i0 < a E a' dd a h u0 m a' E h a d u0 < a b H aa' ^ i0 h A D A' b a t i0 tt i1' b t i0 f i0 l < a k aa d ii0 m ii0' y a t i0 SS II0 n ii0' y a t i0 l u0 l E u0 l uu0' m i0 sil < a' n t a s t a m i0' rr a d a r a j aa' t u0 l H a r aa' r a t i0 w a m u0 s t a w a y aa' t u0 rr U0 T UU0' b a t i0 f i0 l Ah i0 r t i0 f aa' E i0 T A' w A l a h aa' * a l q A' r n sil", 'text': '\ufeffwaraj~aHa Alt~aqoriyru Al~aTHiy >aEad~ahu maEohadu >aboHaA^i haDabapi Alt~ibiti fiy Alo>akaAdiymiy~api AlS~iyniy~api liloEuluwmi - >ano tasotamir~a darajaAtu AloHaraArapi wamusotawayaAtu Alr~uTuwbapi fiy Alo<irotifaAEi TawaAla haTHaA Aloqarono' } ``` ### Data Fields - file: A path to the downloaded audio file in .wav format. - audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]` (a short usage sketch follows at the end of this card). - text: the transcription of the audio file. - phonetic: the transcription in phonetic format. - orthographic: the transcription written in orthographic format. ### Data Splits | | Train | Test | | ----- | ----- | ---- | | dataset | 1813 | 100 | ## Dataset Creation ### Curation Rationale The corpus was created with Speech Synthesis as the main application in mind. Although it has been used as part of a larger corpus for speech recognition and speech denoising. Here are some explanations why the corpus was built the way it is: * Corpus size: Budget limitations and the research goal resulted in the decision not to gather more data. The goal was to show that high quality speech synthesis is possible with smaller corpora. * Phonetic diversity: Just like with many corpora, the phonetic diversity was achieved using greedy methods. Start with a core set of utterances and add more utterances which contribute to adding more phonetic diversity the most iteratively. The measure of diversity is based on the diphone frequency. * Content: News, sports, economics, fully diacritised content from the internet was gathered. The choice of utterances was random to avoid copyright issues. Because of corpus size, achieving diversity of content type was difficult and was not the goal.
* Non-sense utterances: The corpus contains a large set of utterances that are generated computationally to compensate for the diphones missing in the main part of the corpus. The usefulness of non-sense utterances was not proven in the PhD thesis. * The talent: The voice talent had a Syrian dialect from Damascus and spoke in formal Arabic. Please refer to [PhD thesis](#Citation-Information) for more detailed information. ### Source Data #### Initial Data Collection and Normalization News, sports, economics, fully diacritised content from the internet was gathered. The choice of utterances was random to avoid copyright issues. Because of corpus size, achieving diversity of content type was difficult and was not the goal. We were restricted to content which was fully diacritised to make the annotation process easier. Just like with many corpora, the phonetic diversity was achieved using greedy methods. Start with a core set of utterances and add more utterances which contribute to adding more phonetic diversity the most iteratively. The measure of diversity is based on the diphone frequency. Please refer to [PhD thesis](#Citation-Information). #### Who are the source language producers? Please refer to [PhD thesis](#Citation-Information). ### Annotations #### Annotation process Three annotators aligned audio with phonemes with the help of HTK forced alignment. They worked on overlapping parts as well to assess annotator agreement and the quality of the annotations. The entire corpus was checked by human annotators. Please refer to [PhD thesis](#Citation-Information). #### Who are the annotators? Nawar Halabi and two anonymous Arabic language teachers. ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. The voice talent agreed in writing for their voice to be used in speech technologies as long as they stay anonymous. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The corpus was recorded in south Levantine Arabic (Damascian accent) using a professional studio by Nawar Halabi. ### Licensing Information [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @phdthesis{halabi2016modern, title={Modern standard Arabic phonetics for speech synthesis}, author={Halabi, Nawar}, year={2016}, school={University of Southampton} } ``` ### Contributions This dataset was created by: * Nawar Halabi [@nawarhalabi](https://github.com/nawarhalabi) main creator and annotator. * Two anonymous Arabic language teachers as annotators. * One anonymous voice talent. * Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
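A usage sketch for the audio access pattern described under Data Fields above, assuming the corpus loads through the `datasets` library under the `arabic_speech_corpus` identifier and that an audio decoding backend such as `soundfile` is installed:

```python
from datasets import load_dataset

# Load the train split (1813 examples per the card; the test split has 100).
ds = load_dataset("arabic_speech_corpus", split="train")

# Query the sample index first and only then the "audio" key, so a single
# file is decoded and resampled rather than the whole column.
sample = ds[0]
waveform = sample["audio"]["array"]        # decoded float32 numpy array
rate = sample["audio"]["sampling_rate"]    # 48000 per the card

print(f"{len(waveform) / rate:.2f} seconds of audio")
print(sample["orthographic"][:60])         # Buckwalter-style transliteration
```

Indexing the row before the column avoids decoding every file in the split, which is the point of the recommendation in the field description.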
arabic_speech_corpus
[ "task_categories:automatic-speech-recognition", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["ar"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "paperswithcode_id": "arabic-speech-corpus", "pretty_name": "Arabic Speech Corpus", "dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "phonetic", "dtype": "string"}, {"name": "orthographic", "dtype": "string"}], "config_name": "clean", "splits": [{"name": "train", "num_bytes": 1002365, "num_examples": 1813}, {"name": "test", "num_bytes": 65784, "num_examples": 100}], "download_size": 1192302846, "dataset_size": 1068149}, "train-eval-index": [{"config": "clean", "task": "automatic-speech-recognition", "task_id": "speech_recognition", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"file": "path", "text": "text"}, "metrics": [{"type": "wer", "name": "WER"}, {"type": "cer", "name": "CER"}]}]}
2024-01-18T11:01:49+00:00
[]
[ "ar" ]
TAGS #task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Arabic #license-cc-by-4.0 #region-us
Dataset Card for Arabic Speech Corpus ===================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Arabic Speech Corpus * Repository: * Paper: Modern standard Arabic phonetics for speech synthesis * Leaderboard: [Paperswithcode Leaderboard] * Point of Contact: Nawar Halabi ### Dataset Summary This Speech corpus has been developed as part of PhD work carried out by Nawar Halabi at the University of Southampton. The corpus was recorded in south Levantine Arabic (Damascian accent) using a professional studio. Synthesized speech as an output using this corpus has produced a high quality, natural voice. ### Supported Tasks and Leaderboards ### Languages The audio is in Arabic. Dataset Structure ----------------- ### Data Instances A typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'. An example from the dataset is: ### Data Fields * file: A path to the downloaded audio file in .wav format. * audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. * text: the transcription of the audio file. * phonetic: the transcription in phonetic format. * orthographic: the transcription written in orthographic format. ### Data Splits Train: 1813, Test: 100 Dataset Creation ---------------- ### Curation Rationale The corpus was created with Speech Synthesis as the main application in mind. Although it has been used as part of a larger corpus for speech recognition and speech denoising. Here are some explanations why the corpus was built the way it is: * Corpus size: Budget limitations and the research goal resulted in the decision not to gather more data. The goal was to show that high quality speech synthesis is possible with smaller corpora. * Phonetic diversity: Just like with many corpora, the phonetic diversity was achieved using greedy methods. Start with a core set of utterances and add more utterances which contribute to adding more phonetic diversity the most iteratively. The measure of diversity is based on the diphone frequency. * Content: News, sports, economics, fully diacritised content from the internet was gathered. The choice of utterances was random to avoid copyright issues. Because of corpus size, achieving diversity of content type was difficult and was not the goal. * Non-sense utterances: The corpus contains a large set of utterances that are generated computationally to compensate for the diphones missing in the main part of the corpus.
The usefulness of non-sense utterances was not proven in the PhD thesis. * The talent: The voice talent had a Syrian dialect from Damascus and spoke in formal Arabic. Please refer to PhD thesis for more detailed information. ### Source Data #### Initial Data Collection and Normalization News, sports, economics, fully diacritised content from the internet was gathered. The choice of utterances was random to avoid copyright issues. Because of corpus size, achieving diversity of content type was difficult and was not the goal. We were restricted to content which was fully diacritised to make the annotation process easier. Just like with many corpora, the phonetic diversity was achieved using greedy methods. Start with a core set of utterances and add more utterances which contribute to adding more phonetic diversity the most iteratively. The measure of diversity is based on the diphone frequency. Please refer to PhD thesis. #### Who are the source language producers? Please refer to PhD thesis. ### Annotations #### Annotation process Three annotators aligned audio with phonemes with the help of HTK forced alignment. They worked on overlapping parts as well to assess annotator agreement and the quality of the annotations. The entire corpus was checked by human annotators. Please refer to PhD thesis. #### Who are the annotators? Nawar Halabi and two anonymous Arabic language teachers. ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. The voice talent agreed in writing for their voice to be used in speech technologies as long as they stay anonymous. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators The corpus was recorded in south Levantine Arabic (Damascian accent) using a professional studio by Nawar Halabi. ### Licensing Information CC BY 4.0 ### Contributions This dataset was created by: * Nawar Halabi @nawarhalabi main creator and annotator. * Two anonymous Arabic language teachers as annotators. * One anonymous voice talent. * Thanks to @zaidalyafeai for adding this dataset.
[ "### Dataset Summary\n\n\nThis Speech corpus has been developed as part of PhD work carried out by Nawar Halabi at the University of Southampton. The corpus was recorded in south Levantine Arabic (Damascian accent) using a professional studio. Synthesized speech as an output using this corpus has produced a high quality, natural voice.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe audio is in Arabic.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'.\nAn example from the dataset is:", "### Data Fields\n\n\n* file: A path to the downloaded audio file in .wav format.\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* text: the transcription of the audio file.\n* phonetic: the transcription in phonentics format.\n* orthographic: the transcriptions written in orthographic format.", "### Data Splits\n\n\nTrain: dataset, Test: 1813\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe corpus was created with Speech Synthesis as the main application in mind. Although it has been used as part of a larger corpus for speech recognition and speech denoising. Here are some explanations why the corpus was built the way it is:\n\n\n* Corpus size: Budget limitations and the research goal resulted in the decision not to gather more data. The goal was to show that high quality speech synthesis is possible with smaller corpora.\n* Phonetic diversity: Just like with many corpora, the phonetic diversity was acheived using greedy methods. Start with a core set of utterances and add more utterances which contribute to adding more phonetic diversity the most iterativly. The measure of diversity is based on the diphone frequency.\n* Content: News, sports, economics, fully diacritised content from the internet was gathered. The choice of utterances was random to avoid copyright issues. Because of corpus size, acheiving diversity of content type was difficult and was not the goal.\n* Non-sense utterances: The corpus contains a large set of utterances that are generated computationally to compensate for the diphones missing in the main part of the corpus. The usefullness of non-sense utterances was not proven in the PhD thesis.\n* The talent: The voice talent had a Syrian dialect from Damascus and spoke in formal Arabic.\n\n\nPlease refer to PhD thesis for more detailed information.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nNews, sports, economics, fully diacritised content from the internet was gathered. The choice of utterances was random to avoid copyright issues. Because of corpus size, acheiving diversity of content type was difficult and was not the goal. We were restricted to content which was fully diacritised to make the annotation process easier.\n\n\nJust like with many corpora, the phonetic diversity was acheived using greedy methods. 
Start with a core set of utterances and add more utterances which contribute to adding more phonetic diversity the most iteratively. The measure of diversity is based on the diphone frequency.\n\n\nPlease refer to PhD thesis.", "#### Who are the source language producers?\n\n\nPlease refer to PhD thesis.", "### Annotations", "#### Annotation process\n\n\nThree annotators aligned audio with phonemes with the help of HTK forced alignment. They worked on overlapping parts as well to assess annotator agreement and the quality of the annotations. The entire corpus was checked by human annotators.\n\n\nPlease refer to PhD thesis.", "#### Who are the annotators?\n\n\nNawar Halabi and two anonymous Arabic language teachers.", "### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. The voice talent agreed in writing for their voice to be used in speech technologies as long as they stay anonymous.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe corpus was recorded in south Levantine Arabic (Damascian accent) using a professional studio by Nawar Halabi.", "### Licensing Information\n\n\nCC BY 4.0", "### Contributions\n\n\nThis dataset was created by:\n\n\n* Nawar Halabi @nawarhalabi main creator and annotator.\n* Two anonymous Arabic language teachers as annotators.\n* One anonymous voice talent.\n* Thanks to @zaidalyafeai for adding this dataset." ]
[ "TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Arabic #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nThis Speech corpus has been developed as part of PhD work carried out by Nawar Halabi at the University of Southampton. The corpus was recorded in south Levantine Arabic (Damascian accent) using a professional studio. Synthesized speech as an output using this corpus has produced a high quality, natural voice.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe audio is in Arabic.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'.\nAn example from the dataset is:", "### Data Fields\n\n\n* file: A path to the downloaded audio file in .wav format.\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* text: the transcription of the audio file.\n* phonetic: the transcription in phonentics format.\n* orthographic: the transcriptions written in orthographic format.", "### Data Splits\n\n\nTrain: dataset, Test: 1813\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe corpus was created with Speech Synthesis as the main application in mind. Although it has been used as part of a larger corpus for speech recognition and speech denoising. Here are some explanations why the corpus was built the way it is:\n\n\n* Corpus size: Budget limitations and the research goal resulted in the decision not to gather more data. The goal was to show that high quality speech synthesis is possible with smaller corpora.\n* Phonetic diversity: Just like with many corpora, the phonetic diversity was acheived using greedy methods. Start with a core set of utterances and add more utterances which contribute to adding more phonetic diversity the most iterativly. The measure of diversity is based on the diphone frequency.\n* Content: News, sports, economics, fully diacritised content from the internet was gathered. The choice of utterances was random to avoid copyright issues. Because of corpus size, acheiving diversity of content type was difficult and was not the goal.\n* Non-sense utterances: The corpus contains a large set of utterances that are generated computationally to compensate for the diphones missing in the main part of the corpus. The usefullness of non-sense utterances was not proven in the PhD thesis.\n* The talent: The voice talent had a Syrian dialect from Damascus and spoke in formal Arabic.\n\n\nPlease refer to PhD thesis for more detailed information.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nNews, sports, economics, fully diacritised content from the internet was gathered. The choice of utterances was random to avoid copyright issues. 
Because of corpus size, achieving diversity of content type was difficult and was not the goal. We were restricted to content which was fully diacritised to make the annotation process easier.\n\n\nJust like with many corpora, the phonetic diversity was achieved using greedy methods. Start with a core set of utterances and add more utterances which contribute to adding more phonetic diversity the most iteratively. The measure of diversity is based on the diphone frequency.\n\n\nPlease refer to PhD thesis.", "#### Who are the source language producers?\n\n\nPlease refer to PhD thesis.", "### Annotations", "#### Annotation process\n\n\nThree annotators aligned audio with phonemes with the help of HTK forced alignment. They worked on overlapping parts as well to assess annotator agreement and the quality of the annotations. The entire corpus was checked by human annotators.\n\n\nPlease refer to PhD thesis.", "#### Who are the annotators?\n\n\nNawar Halabi and two anonymous Arabic language teachers.", "### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. The voice talent agreed in writing for their voice to be used in speech technologies as long as they stay anonymous.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe corpus was recorded in south Levantine Arabic (Damascian accent) using a professional studio by Nawar Halabi.", "### Licensing Information\n\n\nCC BY 4.0", "### Contributions\n\n\nThis dataset was created by:\n\n\n* Nawar Halabi @nawarhalabi main creator and annotator.\n* Two anonymous Arabic language teachers as annotators.\n* One anonymous voice talent.\n* Thanks to @zaidalyafeai for adding this dataset." ]
[ 88, 72, 10, 17, 42, 236, 20, 320, 4, 155, 16, 5, 68, 21, 73, 7, 8, 14, 33, 9, 65 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Arabic #license-cc-by-4.0 #region-us \n### Dataset Summary\n\n\nThis Speech corpus has been developed as part of PhD work carried out by Nawar Halabi at the University of Southampton. The corpus was recorded in south Levantine Arabic (Damascian accent) using a professional studio. Synthesized speech as an output using this corpus has produced a high quality, natural voice.### Supported Tasks and Leaderboards### Languages\n\n\nThe audio is in Arabic.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA typical data point comprises the path to the audio file, usually called 'file' and its transcription, called 'text'.\nAn example from the dataset is:### Data Fields\n\n\n* file: A path to the downloaded audio file in .wav format.\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* text: the transcription of the audio file.\n* phonetic: the transcription in phonentics format.\n* orthographic: the transcriptions written in orthographic format.### Data Splits\n\n\nTrain: dataset, Test: 1813\n\n\nDataset Creation\n----------------" ]
cc6906b6eda547e4ffc63b8d88ccca7e0515187a
# Dataset Card for "arcd" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/husseinmozannar/SOQAL/tree/master/data](https://github.com/husseinmozannar/SOQAL/tree/master/data) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.94 MB - **Size of the generated dataset:** 1.70 MB - **Total amount of disk used:** 3.64 MB ### Dataset Summary Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### plain_text - **Size of downloaded dataset files:** 1.94 MB - **Size of the generated dataset:** 1.70 MB - **Total amount of disk used:** 3.64 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "answers": "{\"answer_start\": [34], \"text\": [\"صحابي من صحابة رسول الإسلام محمد، وعمُّه وأخوه من الرضاعة وأحد وزرائه الأربعة عشر،\"]}...", "context": "\"حمزة بن عبد المطلب الهاشمي القرشي صحابي من صحابة رسول الإسلام محمد، وعمُّه وأخوه من الرضاعة وأحد وزرائه الأربعة عشر، وهو خير أع...", "id": "621723207492", "question": "من هو حمزة بن عبد المطلب؟", "title": "حمزة بن عبد المطلب" } ``` ### Data Fields The data fields are the same among all splits. #### plain_text - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. 
### Data Splits | name | train | validation | | ---------- | ----: | ---------: | | plain_text | 693 | 702 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{mozannar-etal-2019-neural, title = "Neural {A}rabic Question Answering", author = "Mozannar, Hussein and Maamary, Elie and El Hajal, Karl and Hajj, Hazem", booktitle = "Proceedings of the Fourth Arabic Natural Language Processing Workshop", month = aug, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/W19-4612", doi = "10.18653/v1/W19-4612", pages = "108--118", abstract = "This paper tackles the problem of open domain factual Arabic question answering (QA) using Wikipedia as our knowledge source. This constrains the answer of any question to be a span of text in Wikipedia. Open domain QA for Arabic entails three challenges: annotated QA datasets in Arabic, large scale efficient information retrieval and machine reading comprehension. To deal with the lack of Arabic QA datasets we present the Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles, and a machine translation of the Stanford Question Answering Dataset (Arabic-SQuAD). Our system for open domain question answering in Arabic (SOQAL) is based on two components: (1) a document retriever using a hierarchical TF-IDF approach and (2) a neural reading comprehension model using the pre-trained bi-directional transformer BERT. 
Our experiments on ARCD indicate the effectiveness of our approach with our BERT-based reader achieving a 61.3 F1 score, and our open domain system SOQAL achieving a 27.6 F1 score.", } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@tayciryahmed](https://github.com/tayciryahmed) for adding this dataset.
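The F1 scores quoted in the abstract are the usual SQuAD-style token-overlap F1. A minimal sketch of that metric, assuming plain whitespace tokenization and none of the Arabic-specific normalization an official evaluation script might add:

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted and a reference answer string."""
    pred_tokens, ref_tokens = prediction.split(), reference.split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Example: partial overlap yields a score strictly between 0 and 1.
print(token_f1("حمزة بن عبد المطلب", "بن عبد المطلب"))
```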
arcd
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ar", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ar"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "paperswithcode_id": "arcd", "pretty_name": "ARCD", "language_bcp47": ["ar-SA"], "dataset_info": {"config_name": "plain_text", "features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 811036, "num_examples": 693}, {"name": "validation", "num_bytes": 885620, "num_examples": 702}], "download_size": 365858, "dataset_size": 1696656}, "configs": [{"config_name": "plain_text", "data_files": [{"split": "train", "path": "plain_text/train-*"}, {"split": "validation", "path": "plain_text/validation-*"}], "default": true}]}
2024-01-09T12:44:24+00:00
[]
[ "ar" ]
TAGS #task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Arabic #license-mit #region-us
Dataset Card for "arcd" ======================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 1.94 MB * Size of the generated dataset: 1.70 MB * Total amount of disk used: 3.64 MB ### Dataset Summary Arabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### plain\_text * Size of downloaded dataset files: 1.94 MB * Size of the generated dataset: 1.70 MB * Total amount of disk used: 3.64 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### plain\_text * 'id': a 'string' feature. * 'title': a 'string' feature. * 'context': a 'string' feature. * 'question': a 'string' feature. * 'answers': a dictionary feature containing: + 'text': a 'string' feature. + 'answer\_start': a 'int32' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @albertvillanova, @lewtun, @mariamabarham, @thomwolf, @tayciryahmed for adding this dataset.
[ "### Dataset Summary\n\n\nArabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 1.94 MB\n* Size of the generated dataset: 1.70 MB\n* Total amount of disk used: 3.64 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @albertvillanova, @lewtun, @mariamabarham, @thomwolf, @tayciryahmed for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Arabic #license-mit #region-us \n", "### Dataset Summary\n\n\nArabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 1.94 MB\n* Size of the generated dataset: 1.70 MB\n* Total amount of disk used: 3.64 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @albertvillanova, @lewtun, @mariamabarham, @thomwolf, @tayciryahmed for adding this dataset." ]
[ 91, 32, 10, 11, 6, 52, 17, 93, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 39 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Arabic #license-mit #region-us \n### Dataset Summary\n\n\nArabic Reading Comprehension Dataset (ARCD) composed of 1,395 questions posed by crowdworkers on Wikipedia articles.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### plain\\_text\n\n\n* Size of downloaded dataset files: 1.94 MB\n* Size of the generated dataset: 1.70 MB\n* Total amount of disk used: 3.64 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### plain\\_text\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information### Contributions\n\n\nThanks to @albertvillanova, @lewtun, @mariamabarham, @thomwolf, @tayciryahmed for adding this dataset." ]
ce4d032917566e486a90330392bc7853280e7249
# Dataset Card for ArSenTD-LEV ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [ArSenTD-LEV homepage](http://oma-project.com/) - **Paper:** [ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets](https://arxiv.org/abs/1906.01830) ### Dataset Summary The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) contains 4,000 tweets written in Arabic and equally retrieved from Jordan, Lebanon, Palestine and Syria. ### Supported Tasks and Leaderboards Sentiment analysis ### Languages Arabic Levantine Dialect ## Dataset Structure ### Data Instances {'Country': 0, 'Sentiment': 3, 'Sentiment_Expression': 0, 'Sentiment_Target': 'هاي سوالف عصابات ارهابية', 'Topic': 'politics', 'Tweet': 'ثلاث تفجيرات في #كركوك الحصيلة قتيل و 16 جريح بدأت اكلاوات كركوك كانت امان قبل دخول القوات العراقية ، هاي سوالف عصابات ارهابية'} ### Data Fields `Tweet`: the text content of the tweet \ `Country`: the country from which the tweet was collected ('jordan', 'lebanon', 'syria', 'palestine')\ `Topic`: the topic being discussed in the tweet (personal, politics, religion, sports, entertainment and others) \ `Sentiment`: the overall sentiment expressed in the tweet (very_negative, negative, neutral, positive and very_positive) \ `Sentiment_Expression`: how the sentiment was expressed: explicit, implicit, or none (the latter when sentiment is neutral) \ `Sentiment_Target`: the segment from the tweet to which sentiment is expressed. If sentiment is neutral, this field takes the 'none' value. (A label-decoding sketch follows at the end of this card.) ### Data Splits No standard splits are provided ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Make sure to read and agree to the [license](http://oma-project.com/ArSenL/ArSenTD_Lev_Intro) ### Citation Information ``` @article{baly2019arsentd, title={Arsentd-lev: A multi-topic corpus for target-based sentiment analysis in arabic levantine tweets}, author={Baly, Ramy and Khaddaj, Alaa and Hajj, Hazem and El-Hajj, Wassim and Shaban, Khaled Bashir}, journal={arXiv preprint arXiv:1906.01830}, year={2019} } ``` ### Contributions Thanks to [@moussaKam](https://github.com/moussaKam) for adding this dataset.
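Since `Country` and `Sentiment` are stored as integer class labels (in the instance above, `Country: 0` encodes `jordan` and `Sentiment: 3` encodes `very_negative`, per the feature metadata), the `datasets` `ClassLabel` feature can map the integers back to names. A minimal sketch, assuming the dataset loads under the `arsentd_lev` identifier and that the license terms above have been accepted:

```python
from datasets import load_dataset

ds = load_dataset("arsentd_lev", split="train")
ex = ds[0]

# ClassLabel features expose int2str for decoding the stored integers.
country = ds.features["Country"].int2str(ex["Country"])
sentiment = ds.features["Sentiment"].int2str(ex["Sentiment"])
print(country, sentiment, ex["Topic"], ex["Tweet"][:40])
```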
arsentd_lev
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:topic-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:apc", "language:ajp", "license:other", "arxiv:1906.01830", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["apc", "ajp"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification", "topic-classification"], "paperswithcode_id": "arsentd-lev", "pretty_name": "ArSenTD-LEV", "dataset_info": {"features": [{"name": "Tweet", "dtype": "string"}, {"name": "Country", "dtype": {"class_label": {"names": {"0": "jordan", "1": "lebanon", "2": "syria", "3": "palestine"}}}}, {"name": "Topic", "dtype": "string"}, {"name": "Sentiment", "dtype": {"class_label": {"names": {"0": "negative", "1": "neutral", "2": "positive", "3": "very_negative", "4": "very_positive"}}}}, {"name": "Sentiment_Expression", "dtype": {"class_label": {"names": {"0": "explicit", "1": "implicit", "2": "none"}}}}, {"name": "Sentiment_Target", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1233980, "num_examples": 4000}], "download_size": 392666, "dataset_size": 1233980}}
2024-01-18T11:01:50+00:00
[ "1906.01830" ]
[ "apc", "ajp" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #task_ids-topic-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Levantine Arabic #language-South Levantine Arabic #license-other #arxiv-1906.01830 #region-us
# Dataset Card for ArSenTD-LEV ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: ArSenTD-LEV homepage - Paper: ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets ### Dataset Summary The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) contains 4,000 tweets written in Arabic and equally retrieved from Jordan, Lebanon, Palestine and Syria. ### Supported Tasks and Leaderboards Sentiment analysis ### Languages Arabic Levantine Dialect ## Dataset Structure ### Data Instances {'Country': 0, 'Sentiment': 3, 'Sentiment_Expression': 0, 'Sentiment_Target': 'هاي سوالف عصابات ارهابية', 'Topic': 'politics', 'Tweet': 'ثلاث تفجيرات في #كركوك الحصيلة قتيل و 16 جريح بدأت اكلاوات كركوك كانت امان قبل دخول القوات العراقية ، هاي سوالف عصابات ارهابية'} ### Data Fields 'Tweet': the text content of the tweet \ 'Country': the country from which the tweet was collected ('jordan', 'lebanon', 'syria', 'palestine')\ 'Topic': the topic being discussed in the tweet (personal, politics, religion, sports, entertainment and others) \ 'Sentiment': the overall sentiment expressed in the tweet (very_negative, negative, neutral, positive and very_positive) \ 'Sentiment_Expression': how the sentiment was expressed: explicit, implicit, or none (the latter when sentiment is neutral) \ 'Sentiment_Target': the segment from the tweet to which sentiment is expressed. If sentiment is neutral, this field takes the 'none' value. ### Data Splits No standard splits are provided ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Make sure to read and agree to the license ### Contributions Thanks to @moussaKam for adding this dataset.
[ "# Dataset Card for ArSenTD-LEV", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: ArSenTD-LEV homepage\n- Paper: ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets", "### Dataset Summary\n\nThe Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) contains 4,000 tweets written in Arabic and equally retrieved from Jordan, Lebanon, Palestine and Syria.", "### Supported Tasks and Leaderboards\n\nSentriment analysis", "### Languages\n\nArabic Levantine Dualect", "## Dataset Structure", "### Data Instances\n\n{'Country': 0,\n 'Sentiment': 3,\n 'Sentiment_Expression': 0,\n 'Sentiment_Target': 'هاي سوالف عصابات ارهابية',\n 'Topic': 'politics',\n 'Tweet': 'ثلاث تفجيرات في #كركوك الحصيلة قتيل و 16 جريح بدأت اكلاوات كركوك كانت امان قبل دخول القوات العراقية ، هاي سوالف عصابات ارهابية'}", "### Data Fields\n\n'Tweet': the text content of the tweet \\\n'Country': the country from which the tweet was collected ('jordan', 'lebanon', 'syria', 'palestine')\\\n'Topic': the topic being discussed in the tweet (personal, politics, religion, sports, entertainment and others) \\\n'Sentiment': the overall sentiment expressed in the tweet (very_negative, negative, neutral, positive and very_positive) \\\n'Sentiment_Expression': the way how the sentiment was expressed: explicit, implicit, or none (the latter when sentiment is neutral) \\\n'Sentiment_Target': the segment from the tweet to which sentiment is expressed. If sentiment is neutral, this field takes the 'none' value.", "### Data Splits\n\nNo standard splits are provided", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nMake sure to read and agree to the license", "### Contributions\n\nThanks to @moussaKam for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #task_ids-topic-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Levantine Arabic #language-South Levantine Arabic #license-other #arxiv-1906.01830 #region-us \n", "# Dataset Card for ArSenTD-LEV", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: ArSenTD-LEV homepage\n- Paper: ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets", "### Dataset Summary\n\nThe Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) contains 4,000 tweets written in Arabic and equally retrieved from Jordan, Lebanon, Palestine and Syria.", "### Supported Tasks and Leaderboards\n\nSentriment analysis", "### Languages\n\nArabic Levantine Dualect", "## Dataset Structure", "### Data Instances\n\n{'Country': 0,\n 'Sentiment': 3,\n 'Sentiment_Expression': 0,\n 'Sentiment_Target': 'هاي سوالف عصابات ارهابية',\n 'Topic': 'politics',\n 'Tweet': 'ثلاث تفجيرات في #كركوك الحصيلة قتيل و 16 جريح بدأت اكلاوات كركوك كانت امان قبل دخول القوات العراقية ، هاي سوالف عصابات ارهابية'}", "### Data Fields\n\n'Tweet': the text content of the tweet \\\n'Country': the country from which the tweet was collected ('jordan', 'lebanon', 'syria', 'palestine')\\\n'Topic': the topic being discussed in the tweet (personal, politics, religion, sports, entertainment and others) \\\n'Sentiment': the overall sentiment expressed in the tweet (very_negative, negative, neutral, positive and very_positive) \\\n'Sentiment_Expression': the way how the sentiment was expressed: explicit, implicit, or none (the latter when sentiment is neutral) \\\n'Sentiment_Target': the segment from the tweet to which sentiment is expressed. If sentiment is neutral, this field takes the 'none' value.", "### Data Splits\n\nNo standard splits are provided", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nMake sure to read and agree to the license", "### Contributions\n\nThanks to @moussaKam for adding this dataset." ]
[ 115, 10, 120, 45, 50, 14, 10, 6, 111, 191, 11, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 15, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #task_ids-topic-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Levantine Arabic #language-South Levantine Arabic #license-other #arxiv-1906.01830 #region-us \n# Dataset Card for ArSenTD-LEV## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: ArSenTD-LEV homepage\n- Paper: ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets### Dataset Summary\n\nThe Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) contains 4,000 tweets written in Arabic and equally retrieved from Jordan, Lebanon, Palestine and Syria.### Supported Tasks and Leaderboards\n\nSentriment analysis### Languages\n\nArabic Levantine Dualect## Dataset Structure### Data Instances\n\n{'Country': 0,\n 'Sentiment': 3,\n 'Sentiment_Expression': 0,\n 'Sentiment_Target': 'هاي سوالف عصابات ارهابية',\n 'Topic': 'politics',\n 'Tweet': 'ثلاث تفجيرات في #كركوك الحصيلة قتيل و 16 جريح بدأت اكلاوات كركوك كانت امان قبل دخول القوات العراقية ، هاي سوالف عصابات ارهابية'}" ]
df6c96ba77462a86dc1cf530c12a69da47ea42e7
# Dataset Card for "art" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://leaderboard.allenai.org/anli/submissions/get-started](https://leaderboard.allenai.org/anli/submissions/get-started) - **Repository:** https://github.com/allenai/abductive-commonsense-reasoning - **Paper:** [Abductive Commonsense Reasoning](https://arxiv.org/abs/1908.05739) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 5.12 MB - **Size of the generated dataset:** 34.36 MB - **Total amount of disk used:** 39.48 MB ### Dataset Summary ART consists of over 20k commonsense narrative contexts and 200k explanations. The Abductive Natural Language Inference Dataset from AI2. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### anli - **Size of downloaded dataset files:** 5.12 MB - **Size of the generated dataset:** 34.36 MB - **Total amount of disk used:** 39.48 MB An example of 'train' looks as follows. ``` { "hypothesis_1": "Chad's car had all sorts of other problems besides alignment.", "hypothesis_2": "Chad's car had all sorts of benefits other than being sexy.", "label": 1, "observation_1": "Chad went to get the wheel alignment measured on his car.", "observation_2": "The mechanic provided a working alignment with new body work." } ``` ### Data Fields The data fields are the same among all splits. #### anli - `observation_1`: a `string` feature. - `observation_2`: a `string` feature. - `hypothesis_1`: a `string` feature. - `hypothesis_2`: a `string` feature. - `label`: a classification label, with possible values including `0` (0), `1` (1), `2` (2). ### Data Splits |name|train |validation| |----|-----:|---------:| |anli|169654| 1532| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{Bhagavatula2020Abductive, title={Abductive Commonsense Reasoning}, author={Chandra Bhagavatula and Ronan Le Bras and Chaitanya Malaviya and Keisuke Sakaguchi and Ari Holtzman and Hannah Rashkin and Doug Downey and Wen-tau Yih and Yejin Choi}, booktitle={International Conference on Learning Representations}, year={2020}, url={https://openreview.net/forum?id=Byg1v1HKDB} } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
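A minimal loading sketch for the card above, assuming the dataset is available through the `datasets` library under the id `art` with the `anli` configuration (as the metadata below indicates):

```python
from datasets import load_dataset

# Load the "anli" configuration; "train" and "validation" splits are available.
dataset = load_dataset("art", "anli")

example = dataset["train"][0]
# Each example pairs two observations with two candidate hypotheses;
# "label" is a class index over {0, 1, 2} (see the field description above).
print(example["observation_1"])
print(example["observation_2"])
print(example["hypothesis_1"])
print(example["hypothesis_2"])
print(example["label"])
```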
art
[ "task_categories:multiple-choice", "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "abductive-natural-language-inference", "arxiv:1908.05739", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["multiple-choice", "text-classification"], "task_ids": ["natural-language-inference"], "paperswithcode_id": "art-dataset", "pretty_name": "Abductive Reasoning in narrative Text", "tags": ["abductive-natural-language-inference"], "dataset_info": {"config_name": "anli", "features": [{"name": "observation_1", "dtype": "string"}, {"name": "observation_2", "dtype": "string"}, {"name": "hypothesis_1", "dtype": "string"}, {"name": "hypothesis_2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2"}}}}], "splits": [{"name": "validation", "num_bytes": 311146, "num_examples": 1532}, {"name": "train", "num_bytes": 33918790, "num_examples": 169654}], "download_size": 9191805, "dataset_size": 34229936}, "configs": [{"config_name": "anli", "data_files": [{"split": "validation", "path": "anli/validation-*"}, {"split": "train", "path": "anli/train-*"}], "default": true}]}
2024-01-09T12:45:10+00:00
[ "1908.05739" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-unknown #abductive-natural-language-inference #arxiv-1908.05739 #region-us
Dataset Card for "art" ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: Abductive Commonsense Reasoning * Point of Contact: * Size of downloaded dataset files: 5.12 MB * Size of the generated dataset: 34.36 MB * Total amount of disk used: 39.48 MB ### Dataset Summary ART consists of over 20k commonsense narrative contexts and 200k explanations. The Abductive Natural Language Inference Dataset from AI2. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### anli * Size of downloaded dataset files: 5.12 MB * Size of the generated dataset: 34.36 MB * Total amount of disk used: 39.48 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### anli * 'observation\_1': a 'string' feature. * 'observation\_2': a 'string' feature. * 'hypothesis\_1': a 'string' feature. * 'hypothesis\_2': a 'string' feature. * 'label': a classification label, with possible values including '0' (0), '1' (1), '2' (2). ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @patrickvonplaten, @thomwolf, @mariamabarham, @lewtun, @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nART consists of over 20k commonsense narrative contexts and 200k explanations.\n\n\nThe Abductive Natural Language Inference Dataset from AI2.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### anli\n\n\n* Size of downloaded dataset files: 5.12 MB\n* Size of the generated dataset: 34.36 MB\n* Total amount of disk used: 39.48 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### anli\n\n\n* 'observation\\_1': a 'string' feature.\n* 'observation\\_2': a 'string' feature.\n* 'hypothesis\\_1': a 'string' feature.\n* 'hypothesis\\_2': a 'string' feature.\n* 'label': a classification label, with possible values including '0' (0), '1' (1), '2' (2).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @patrickvonplaten, @thomwolf, @mariamabarham, @lewtun, @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-unknown #abductive-natural-language-inference #arxiv-1908.05739 #region-us \n", "### Dataset Summary\n\n\nART consists of over 20k commonsense narrative contexts and 200k explanations.\n\n\nThe Abductive Natural Language Inference Dataset from AI2.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### anli\n\n\n* Size of downloaded dataset files: 5.12 MB\n* Size of the generated dataset: 34.36 MB\n* Total amount of disk used: 39.48 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### anli\n\n\n* 'observation\\_1': a 'string' feature.\n* 'observation\\_2': a 'string' feature.\n* 'hypothesis\\_1': a 'string' feature.\n* 'hypothesis\\_2': a 'string' feature.\n* 'label': a classification label, with possible values including '0' (0), '1' (1), '2' (2).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @patrickvonplaten, @thomwolf, @mariamabarham, @lewtun, @lhoestq for adding this dataset." ]
[ 122, 39, 10, 11, 6, 51, 17, 95, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 39 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-unknown #abductive-natural-language-inference #arxiv-1908.05739 #region-us \n### Dataset Summary\n\n\nART consists of over 20k commonsense narrative contexts and 200k explanations.\n\n\nThe Abductive Natural Language Inference Dataset from AI2.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### anli\n\n\n* Size of downloaded dataset files: 5.12 MB\n* Size of the generated dataset: 34.36 MB\n* Total amount of disk used: 39.48 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### anli\n\n\n* 'observation\\_1': a 'string' feature.\n* 'observation\\_2': a 'string' feature.\n* 'hypothesis\\_1': a 'string' feature.\n* 'hypothesis\\_2': a 'string' feature.\n* 'label': a classification label, with possible values including '0' (0), '1' (1), '2' (2).### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information" ]
c70944cb158dcdab8a5403b1fa20f28119f701a6
# Dataset Card for arXiv Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Kaggle arXiv Dataset Homepage](https://www.kaggle.com/Cornell-University/arxiv) - **Repository:** - **Paper:** [On the Use of ArXiv as a Dataset](https://arxiv.org/abs/1905.00075) - **Leaderboard:** - **Point of Contact:** [Matt Bierbaum](mailto:matt.bierbaum@gmail.com) ### Dataset Summary A dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The language supported is English ## Dataset Structure ### Data Instances This dataset is a mirror of the original ArXiv data. Because the full dataset is rather large (1.1TB and growing), this dataset provides only a metadata file in the json format. An example is given below ``` {'id': '0704.0002', 'submitter': 'Louis Theran', 'authors': 'Ileana Streinu and Louis Theran', 'title': 'Sparsity-certifying Graph Decompositions', 'comments': 'To appear in Graphs and Combinatorics', 'journal-ref': None, 'doi': None, 'report-no': None, 'categories': 'math.CO cs.CG', 'license': 'http://arxiv.org/licenses/nonexclusive-distrib/1.0/', 'abstract': ' We describe a new algorithm, the $(k,\\ell)$-pebble game with colors, and use\nit obtain a characterization of the family of $(k,\\ell)$-sparse graphs and\nalgorithmic solutions to a family of problems concerning tree decompositions of\ngraphs. Special instances of sparse graphs appear in rigidity theory and have\nreceived increased attention in recent years. In particular, our colored\npebbles generalize and strengthen the previous results of Lee and Streinu and\ngive a new proof of the Tutte-Nash-Williams characterization of arboricity. We\nalso present a new decomposition that certifies sparsity based on the\n$(k,\\ell)$-pebble game with colors. 
Our work also exposes connections between\npebble game algorithms and previous sparse graph algorithms by Gabow, Gabow and\nWestermann and Hendrickson.\n', 'update_date': '2008-12-13'} ``` ### Data Fields - `id`: ArXiv ID (can be used to access the paper) - `submitter`: Who submitted the paper - `authors`: Authors of the paper - `title`: Title of the paper - `comments`: Additional info, such as number of pages and figures - `journal-ref`: Information about the journal the paper was published in - `doi`: [Digital Object Identifier](https://www.doi.org) - `report-no`: Report Number - `abstract`: The abstract of the paper - `categories`: Categories / tags in the ArXiv system ### Data Splits The data was not split. ## Dataset Creation ### Curation Rationale For nearly 30 years, ArXiv has served the public and research communities by providing open access to scholarly articles, from the vast branches of physics to the many subdisciplines of computer science to everything in between, including math, statistics, electrical engineering, quantitative biology, and economics. This rich corpus of information offers significant, but sometimes overwhelming depth. In these times of unique global challenges, efficient extraction of insights from data is essential. To help make the arXiv more accessible, a free, open pipeline on Kaggle to the machine-readable arXiv dataset: a repository of 1.7 million articles, with relevant features such as article titles, authors, categories, abstracts, full text PDFs, and more is presented to empower new use cases that can lead to the exploration of richer machine learning techniques that combine multi-modal features towards applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces. ### Source Data This data is based on arXiv papers. [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations This dataset contains no annotations. #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The original data is maintained by [ArXiv](https://arxiv.org/) ### Licensing Information The data is under the [Creative Commons CC0 1.0 Universal Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/) ### Citation Information ``` @misc{clement2019arxiv, title={On the Use of ArXiv as a Dataset}, author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi}, year={2019}, eprint={1905.00075}, archivePrefix={arXiv}, primaryClass={cs.IR} } ``` ### Contributions Thanks to [@tanmoyio](https://github.com/tanmoyio) for adding this dataset.
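Because only the metadata file is distributed, a plain JSON Lines reader is enough to start exploring it. A sketch using only the standard library; the local filename is an assumption based on the Kaggle release and may differ:

```python
import json

# Hypothetical path to the downloaded Kaggle metadata snapshot
# (assumed to be newline-delimited JSON: one paper record per line).
snapshot_path = "arxiv-metadata-oai-snapshot.json"

with open(snapshot_path, encoding="utf-8") as f:
    first_paper = json.loads(next(f))  # parse the first record only

# The record carries the fields documented above.
print(first_paper["id"], first_paper["title"], first_paper["categories"])
```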
arxiv_dataset
[ "task_categories:translation", "task_categories:summarization", "task_categories:text-retrieval", "task_ids:document-retrieval", "task_ids:entity-linking-retrieval", "task_ids:explanation-generation", "task_ids:fact-checking-retrieval", "task_ids:text-simplification", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc0-1.0", "arxiv:1905.00075", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["translation", "summarization", "text-retrieval"], "task_ids": ["document-retrieval", "entity-linking-retrieval", "explanation-generation", "fact-checking-retrieval", "text-simplification"], "pretty_name": "arXiv Dataset", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "submitter", "dtype": "string"}, {"name": "authors", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "comments", "dtype": "string"}, {"name": "journal-ref", "dtype": "string"}, {"name": "doi", "dtype": "string"}, {"name": "report-no", "dtype": "string"}, {"name": "categories", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "update_date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3056873071, "num_examples": 2349354}], "download_size": 0, "dataset_size": 3056873071}}
2024-01-18T11:01:52+00:00
[ "1905.00075" ]
[ "en" ]
TAGS #task_categories-translation #task_categories-summarization #task_categories-text-retrieval #task_ids-document-retrieval #task_ids-entity-linking-retrieval #task_ids-explanation-generation #task_ids-fact-checking-retrieval #task_ids-text-simplification #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc0-1.0 #arxiv-1905.00075 #region-us
# Dataset Card for arXiv Dataset ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Kaggle arXiv Dataset Homepage - Repository: - Paper: On the Use of ArXiv as a Dataset - Leaderboard: - Point of Contact: Matt Bierbaum ### Dataset Summary A dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces. ### Supported Tasks and Leaderboards ### Languages The language supported is English ## Dataset Structure ### Data Instances This dataset is a mirror of the original ArXiv data. Because the full dataset is rather large (1.1TB and growing), this dataset provides only a metadata file in the json format. An example is given below ### Data Fields - 'id': ArXiv ID (can be used to access the paper) - 'submitter': Who submitted the paper - 'authors': Authors of the paper - 'title': Title of the paper - 'comments': Additional info, such as number of pages and figures - 'journal-ref': Information about the journal the paper was published in - 'doi': Digital Object Identifier - 'report-no': Report Number - 'abstract': The abstract of the paper - 'categories': Categories / tags in the ArXiv system ### Data Splits The data was not split. ## Dataset Creation ### Curation Rationale For nearly 30 years, ArXiv has served the public and research communities by providing open access to scholarly articles, from the vast branches of physics to the many subdisciplines of computer science to everything in between, including math, statistics, electrical engineering, quantitative biology, and economics. This rich corpus of information offers significant, but sometimes overwhelming depth. In these times of unique global challenges, efficient extraction of insights from data is essential. To help make the arXiv more accessible, a free, open pipeline on Kaggle to the machine-readable arXiv dataset: a repository of 1.7 million articles, with relevant features such as article titles, authors, categories, abstracts, full text PDFs, and more is presented to empower new use cases that can lead to the exploration of richer machine learning techniques that combine multi-modal features towards applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces. ### Source Data This data is based on arXiv papers. #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations This dataset contains no annotations. #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators The original data is maintained by ArXiv ### Licensing Information The data is under the Creative Commons CC0 1.0 Universal Public Domain Dedication ### Contributions Thanks to @tanmoyio for adding this dataset.
[ "# Dataset Card for arXiv Dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Kaggle arXiv Dataset Homepage\n- Repository: \n- Paper: On the Use of ArXiv as a Dataset\n- Leaderboard: \n- Point of Contact: Matt Bierbaum", "### Dataset Summary\n\nA dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language supported is English", "## Dataset Structure", "### Data Instances\n\nThis dataset is a mirror of the original ArXiv data. Because the full dataset is rather large (1.1TB and growing), this dataset provides only a metadata file in the json format. An example is given below", "### Data Fields\n\n- 'id': ArXiv ID (can be used to access the paper)\n- 'submitter': Who submitted the paper\n- 'authors': Authors of the paper\n- 'title': Title of the paper\n- 'comments': Additional info, such as number of pages and figures\n- 'journal-ref': Information about the journal the paper was published in\n- 'doi': Digital Object Identifier\n- 'report-no': Report Number\n- 'abstract': The abstract of the paper\n- 'categories': Categories / tags in the ArXiv system", "### Data Splits\n\nThe data was not splited.", "## Dataset Creation", "### Curation Rationale\n\nFor nearly 30 years, ArXiv has served the public and research communities by providing open access to scholarly articles, from the vast branches of physics to the many subdisciplines of computer science to everything in between, including math, statistics, electrical engineering, quantitative biology, and economics. This rich corpus of information offers significant, but sometimes overwhelming depth. In these times of unique global challenges, efficient extraction of insights from data is essential. 
To help make the arXiv more accessible, a free, open pipeline on Kaggle to the machine-readable arXiv dataset: a repository of 1.7 million articles, with relevant features such as article titles, authors, categories, abstracts, full text PDFs, and more is presented to empower new use cases that can lead to the exploration of richer machine learning techniques that combine multi-modal features towards applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.", "### Source Data\n\nThis data is based on arXiv papers.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations\n\nThis dataset contains no annotations.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe original data is maintained by ArXiv", "### Licensing Information\n\nThe data is under the Creative Commons CC0 1.0 Universal Public Domain Dedication", "### Contributions\n\nThanks to @tanmoyio for adding this dataset." ]
[ "TAGS\n#task_categories-translation #task_categories-summarization #task_categories-text-retrieval #task_ids-document-retrieval #task_ids-entity-linking-retrieval #task_ids-explanation-generation #task_ids-fact-checking-retrieval #task_ids-text-simplification #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc0-1.0 #arxiv-1905.00075 #region-us \n", "# Dataset Card for arXiv Dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Kaggle arXiv Dataset Homepage\n- Repository: \n- Paper: On the Use of ArXiv as a Dataset\n- Leaderboard: \n- Point of Contact: Matt Bierbaum", "### Dataset Summary\n\nA dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe language supported is English", "## Dataset Structure", "### Data Instances\n\nThis dataset is a mirror of the original ArXiv data. Because the full dataset is rather large (1.1TB and growing), this dataset provides only a metadata file in the json format. An example is given below", "### Data Fields\n\n- 'id': ArXiv ID (can be used to access the paper)\n- 'submitter': Who submitted the paper\n- 'authors': Authors of the paper\n- 'title': Title of the paper\n- 'comments': Additional info, such as number of pages and figures\n- 'journal-ref': Information about the journal the paper was published in\n- 'doi': Digital Object Identifier\n- 'report-no': Report Number\n- 'abstract': The abstract of the paper\n- 'categories': Categories / tags in the ArXiv system", "### Data Splits\n\nThe data was not splited.", "## Dataset Creation", "### Curation Rationale\n\nFor nearly 30 years, ArXiv has served the public and research communities by providing open access to scholarly articles, from the vast branches of physics to the many subdisciplines of computer science to everything in between, including math, statistics, electrical engineering, quantitative biology, and economics. This rich corpus of information offers significant, but sometimes overwhelming depth. In these times of unique global challenges, efficient extraction of insights from data is essential. 
To help make the arXiv more accessible, a free, open pipeline on Kaggle to the machine-readable arXiv dataset: a repository of 1.7 million articles, with relevant features such as article titles, authors, categories, abstracts, full text PDFs, and more is presented to empower new use cases that can lead to the exploration of richer machine learning techniques that combine multi-modal features towards applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.", "### Source Data\n\nThis data is based on arXiv papers.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations\n\nThis dataset contains no annotations.", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe original data is maintained by ArXiv", "### Licensing Information\n\nThe data is under the Creative Commons CC0 1.0 Universal Public Domain Dedication", "### Contributions\n\nThanks to @tanmoyio for adding this dataset." ]
[ 172, 10, 120, 47, 50, 10, 10, 6, 54, 136, 12, 5, 238, 15, 10, 10, 15, 5, 9, 8, 8, 7, 8, 7, 5, 16, 22, 18 ]
[ "passage: TAGS\n#task_categories-translation #task_categories-summarization #task_categories-text-retrieval #task_ids-document-retrieval #task_ids-entity-linking-retrieval #task_ids-explanation-generation #task_ids-fact-checking-retrieval #task_ids-text-simplification #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc0-1.0 #arxiv-1905.00075 #region-us \n# Dataset Card for arXiv Dataset## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Kaggle arXiv Dataset Homepage\n- Repository: \n- Paper: On the Use of ArXiv as a Dataset\n- Leaderboard: \n- Point of Contact: Matt Bierbaum### Dataset Summary\n\nA dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.### Supported Tasks and Leaderboards### Languages\n\nThe language supported is English## Dataset Structure### Data Instances\n\nThis dataset is a mirror of the original ArXiv data. Because the full dataset is rather large (1.1TB and growing), this dataset provides only a metadata file in the json format. An example is given below" ]
9157196d77890cf20b57075353813b34dba3426e
# Dataset Card for Ascent KB ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://ascent.mpi-inf.mpg.de/ - **Repository:** https://github.com/phongnt570/ascent - **Paper:** https://arxiv.org/abs/2011.00905 - **Point of Contact:** http://tuan-phong.com ### Dataset Summary This dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline developed at the [Max Planck Institute for Informatics](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/). The focus of this dataset is on everyday concepts such as *elephant*, *car*, *laptop*, etc. The current version of Ascent KB (v1.0.0) is approximately **19 times larger than ConceptNet** (note that, in this comparison, non-commonsense knowledge in ConceptNet such as lexical relations is excluded). For more details, take a look at [the research paper](https://arxiv.org/abs/2011.00905) and [the website](https://ascent.mpi-inf.mpg.de). ### Supported Tasks and Leaderboards The dataset can be used in a wide range of downstream tasks such as commonsense question answering or dialogue systems. ### Languages The dataset is in English. ## Dataset Structure ### Data Instances There are two configurations available for this dataset: 1. `canonical` (default): This part contains `<arg1 ; rel ; arg2>` assertions where the relations (`rel`) were mapped to [ConceptNet relations](https://github.com/commonsense/conceptnet5/wiki/Relations) with slight modifications: - Introducing 2 new relations: `/r/HasSubgroup`, `/r/HasAspect`. - All `/r/HasA` relations were replaced with `/r/HasAspect`. This is motivated by the [ATOMIC-2020](https://allenai.org/data/atomic-2020) schema, although they grouped all `/r/HasA` and `/r/HasProperty` into `/r/HasProperty`. - The `/r/UsedFor` relation was replaced with `/r/ObjectUse` which is broader (could be either _"used for"_, _"used in"_, or _"used as"_, etc.). This is also taken from ATOMIC-2020. 2. `open`: This part contains open assertions of the form `<subject ; predicate ; object>` extracted directly from web contents. This is the original form of the `canonical` triples. In both configurations, each assertion is equipped with extra information including: a set of semantic `facets` (e.g., *LOCATION*, *TEMPORAL*, etc.), its `support` (i.e., number of occurrences), and a list of `source_sentences`. 
An example row in the `canonical` configuration: ```JSON { "arg1": "elephant", "rel": "/r/HasProperty", "arg2": "intelligent", "support": 15, "facets": [ { "value": "extremely", "type": "DEGREE", "support": 11 } ], "source_sentences": [ { "text": "Elephants are extremely intelligent animals.", "source": "https://www.softschools.com/facts/animals/asian_elephant_facts/2310/" }, { "text": "Elephants are extremely intelligent creatures and an elephant's brain can weigh as much as 4-6 kg.", "source": "https://www.elephantsforafrica.org/elephant-facts/" } ] } ``` ### Data Fields - **For `canonical` configuration** - `arg1`: the first argument to the relationship, e.g., *elephant* - `rel`: the canonical relation, e.g., */r/HasProperty* - `arg2`: the second argument to the relationship, e.g., *intelligence* - `support`: the number of occurrences of the assertion, e.g., *15* - `facets`: an array of semantic facets, each contains - `value`: facet value, e.g., *extremely* - `type`: facet type, e.g., *DEGREE* - `support`: the number of occurrences of the facet, e.g., *11* - `source_sentences`: an array of source sentences from which the assertion was extracted, each contains - `text`: the raw text of the sentence - `source`: the URL to its parent document - **For `open` configuration** - The fields of this configuration are the same as the `canonical` configuration's, except that the (`arg1`, `rel`, `arg2`) fields are replaced with the (`subject`, `predicate`, `object`) fields which are free text phrases extracted directly from the source sentences using an Open Information Extraction (OpenIE) tool. ### Data Splits There are no splits. All data points come to a default split called `train`. ## Dataset Creation ### Curation Rationale The commonsense knowledge base was created to assist in development of robust and reliable AI. ### Source Data #### Initial Data Collection and Normalization Texts were collected from the web using the Bing Search API, and went through various cleaning steps before being processed by an OpenIE tool to get open assertions. The assertions were then grouped into semantically equivalent clusters. Take a look at the research paper for more details. #### Who are the source language producers? Web users. ### Annotations #### Annotation process None. #### Who are the annotators? None. ### Personal and Sensitive Information Unknown. ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators The knowledge base has been developed by researchers at the [Max Planck Institute for Informatics](https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/). Contact [Tuan-Phong Nguyen](http://tuan-phong.com) in case of questions and comments. ### Licensing Information [The Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @InProceedings{nguyen2021www, title={Advanced Semantics for Commonsense Knowledge Extraction}, author={Nguyen, Tuan-Phong and Razniewski, Simon and Weikum, Gerhard}, year={2021}, booktitle={The Web Conference 2021}, } ``` ### Contributions Thanks to [@phongnt570](https://github.com/phongnt570) for adding this dataset.
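A loading sketch for the two configurations, assuming the dataset is served through the `datasets` library under the id `ascent_kb` (matching the metadata below); `canonical` is the default:

```python
from datasets import load_dataset

# "canonical" yields <arg1 ; rel ; arg2> triples; "open" yields the raw
# <subject ; predicate ; object> phrases instead.
kb = load_dataset("ascent_kb", "canonical", split="train")

assertion = kb[0]
print(assertion["arg1"], assertion["rel"], assertion["arg2"])
print("support:", assertion["support"])
for facet in assertion["facets"]:  # semantic facets, e.g. DEGREE
    print("facet:", facet["type"], "=", facet["value"])
```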
ascent_kb
[ "task_categories:other", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0", "knowledge-base", "arxiv:2011.00905", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "paperswithcode_id": "ascentkb", "pretty_name": "Ascent KB", "tags": ["knowledge-base"], "dataset_info": [{"config_name": "canonical", "features": [{"name": "arg1", "dtype": "string"}, {"name": "rel", "dtype": "string"}, {"name": "arg2", "dtype": "string"}, {"name": "support", "dtype": "int64"}, {"name": "facets", "list": [{"name": "value", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "support", "dtype": "int64"}]}, {"name": "source_sentences", "list": [{"name": "text", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2976665740, "num_examples": 8904060}], "download_size": 898478552, "dataset_size": 2976665740}, {"config_name": "open", "features": [{"name": "subject", "dtype": "string"}, {"name": "predicate", "dtype": "string"}, {"name": "object", "dtype": "string"}, {"name": "support", "dtype": "int64"}, {"name": "facets", "list": [{"name": "value", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "support", "dtype": "int64"}]}, {"name": "source_sentences", "list": [{"name": "text", "dtype": "string"}, {"name": "source", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2882646222, "num_examples": 8904060}], "download_size": 900156754, "dataset_size": 2882646222}], "configs": [{"config_name": "canonical", "data_files": [{"split": "train", "path": "canonical/train-*"}], "default": true}, {"config_name": "open", "data_files": [{"split": "train", "path": "open/train-*"}]}]}
2024-01-09T14:44:26+00:00
[ "2011.00905" ]
[ "en" ]
TAGS #task_categories-other #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #knowledge-base #arxiv-2011.00905 #region-us
# Dataset Card for Ascent KB ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Point of Contact: URL ### Dataset Summary This dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline developed at the Max Planck Institute for Informatics. The focus of this dataset is on everyday concepts such as *elephant*, *car*, *laptop*, etc. The current version of Ascent KB (v1.0.0) is approximately 19 times larger than ConceptNet (note that, in this comparison, non-commonsense knowledge in ConceptNet such as lexical relations is excluded). For more details, take a look at the research paper and the website. ### Supported Tasks and Leaderboards The dataset can be used in a wide range of downstream tasks such as commonsense question answering or dialogue systems. ### Languages The dataset is in English. ## Dataset Structure ### Data Instances There are two configurations available for this dataset: 1. 'canonical' (default): This part contains '<arg1 ; rel ; arg2>' assertions where the relations ('rel') were mapped to ConceptNet relations with slight modifications: - Introducing 2 new relations: '/r/HasSubgroup', '/r/HasAspect'. - All '/r/HasA' relations were replaced with '/r/HasAspect'. This is motivated by the ATOMIC-2020 schema, although they grouped all '/r/HasA' and '/r/HasProperty' into '/r/HasProperty'. - The '/r/UsedFor' relation was replaced with '/r/ObjectUse' which is broader (could be either _"used for"_, _"used in"_, or _"used as"_, etc.). This is also taken from ATOMIC-2020. 2. 'open': This part contains open assertions of the form '<subject ; predicate ; object>' extracted directly from web contents. This is the original form of the 'canonical' triples. In both configurations, each assertion is equipped with extra information including: a set of semantic 'facets' (e.g., *LOCATION*, *TEMPORAL*, etc.), its 'support' (i.e., number of occurrences), and a list of 'source_sentences'. An example row in the 'canonical' configuration: ### Data Fields - For 'canonical' configuration - 'arg1': the first argument to the relationship, e.g., *elephant* - 'rel': the canonical relation, e.g., */r/HasProperty* - 'arg2': the second argument to the relationship, e.g., *intelligence* - 'support': the number of occurrences of the assertion, e.g., *15* - 'facets': an array of semantic facets, each contains - 'value': facet value, e.g., *extremely* - 'type': facet type, e.g., *DEGREE* - 'support': the number of occurrences of the facet, e.g., *11* - 'source_sentences': an array of source sentences from which the assertion was extracted, each contains - 'text': the raw text of the sentence - 'source': the URL to its parent document - For 'open' configuration - The fields of this configuration are the same as the 'canonical' configuration's, except that the ('arg1', 'rel', 'arg2') fields are replaced with the ('subject', 'predicate', 'object') fields which are free text phrases extracted directly from the source sentences using an Open Information Extraction (OpenIE) tool. 
### Data Splits There are no splits. All data points come to a default split called 'train'. ## Dataset Creation ### Curation Rationale The commonsense knowledge base was created to assist in development of robust and reliable AI. ### Source Data #### Initial Data Collection and Normalization Texts were collected from the web using the Bing Search API, and went through various cleaning steps before being processed by an OpenIE tool to get open assertions. The assertions were then grouped into semantically equivalent clusters. Take a look at the research paper for more details. #### Who are the source language producers? Web users. ### Annotations #### Annotation process None. #### Who are the annotators? None. ### Personal and Sensitive Information Unknown. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators The knowledge base has been developed by researchers at the Max Planck Institute for Informatics. Contact Tuan-Phong Nguyen in case of questions and comments. ### Licensing Information The Creative Commons Attribution 4.0 International License ### Contributions Thanks to @phongnt570 for adding this dataset.
[ "# Dataset Card for Ascent KB", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL", "### Dataset Summary\n\nThis dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline developed at the Max Planck Institute for Informatics.\nThe focus of this dataset is on everyday concepts such as *elephant*, *car*, *laptop*, etc.\nThe current version of Ascent KB (v1.0.0) is approximately 19 times larger than ConceptNet (note that, in this comparison, non-commonsense knowledge in ConceptNet such as lexical relations is excluded).\n\nFor more details, take a look at\nthe research paper and\nthe website.", "### Supported Tasks and Leaderboards\n\nThe dataset can be used in a wide range of downstream tasks such as commonsense question answering or dialogue systems.", "### Languages\n\nThe dataset is in English.", "## Dataset Structure", "### Data Instances\nThere are two configurations available for this dataset:\n1. 'canonical' (default): This part contains '<arg1 ; rel ; arg2>'\n assertions where the relations ('rel') were mapped to \n ConceptNet relations\n with slight modifications:\n - Introducing 2 new relations: '/r/HasSubgroup', '/r/HasAspect'.\n - All '/r/HasA' relations were replaced with '/r/HasAspect'. \n This is motivated by the ATOMIC-2020\n schema, although they grouped all '/r/HasA' and\n '/r/HasProperty' into '/r/HasProperty'.\n - The '/r/UsedFor' relation was replaced with '/r/ObjectUse'\n which is broader (could be either _\"used for\"_, _\"used in\"_, or _\"used as\"_, ect.).\n This is also taken from ATOMIC-2020.\n2. 'open': This part contains open assertions of the form\n '<subject ; predicate ; object>' extracted directly from web\n contents. This is the original form of the 'canonical' triples. 
\n\nIn both configurations, each assertion is equipped with \nextra information including: a set of semantic 'facets'\n(e.g., *LOCATION*, *TEMPORAL*, etc.), its 'support' (i.e., number of occurrences),\nand a list of 'source_sentences'.\n\nAn example row in the 'canonical' configuration:", "### Data Fields\n\n- For 'canonical' configuration\n - 'arg1': the first argument to the relationship, e.g., *elephant*\n - 'rel': the canonical relation, e.g., */r/HasProperty*\n - 'arg2': the second argument to the relationship, e.g., *intelligence*\n - 'support': the number of occurrences of the assertion, e.g., *15*\n - 'facets': an array of semantic facets, each contains\n - 'value': facet value, e.g., *extremely*\n - 'type': facet type, e.g., *DEGREE*\n - 'support': the number of occurrences of the facet, e.g., *11*\n - 'source_sentences': an array of source sentences from which the assertion was\n extracted, each contains\n - 'text': the raw text of the sentence\n - 'source': the URL to its parent document\n\n- For 'open' configuration\n - The fields of this configuration are the same as the 'canonical'\n configuration's, except that\n the ('arg1', 'rel', 'arg2') fields are replaced with the\n ('subject', 'predicate', 'object') fields\n which are free\n text phrases extracted directly from the source sentences\n using an Open Information Extraction (OpenIE) tool.", "### Data Splits\n\nThere are no splits. All data points come to a default split called 'train'.", "## Dataset Creation", "### Curation Rationale\n\nThe commonsense knowledge base was created to assist in development of robust and reliable AI.", "### Source Data", "#### Initial Data Collection and Normalization\n\nTexts were collected from the web using the Bing Search API, and went through various cleaning steps before being processed by an OpenIE tool to get open assertions.\nThe assertions were then grouped into semantically equivalent clusters.\nTake a look at the research paper for more details.", "#### Who are the source language producers?\n\nWeb users.", "### Annotations", "#### Annotation process\n\nNone.", "#### Who are the annotators?\n\nNone.", "### Personal and Sensitive Information\n\nUnknown.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe knowledge base has been developed by researchers at the\nMax Planck Institute for Informatics.\n\nContact Tuan-Phong Nguyen in case of questions and comments.", "### Licensing Information\n\nThe Creative Commons Attribution 4.0 International License", "### Contributions\n\nThanks to @phongnt570 for adding this dataset." ]
[ "TAGS\n#task_categories-other #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #knowledge-base #arxiv-2011.00905 #region-us \n", "# Dataset Card for Ascent KB", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL", "### Dataset Summary\n\nThis dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline developed at the Max Planck Institute for Informatics.\nThe focus of this dataset is on everyday concepts such as *elephant*, *car*, *laptop*, etc.\nThe current version of Ascent KB (v1.0.0) is approximately 19 times larger than ConceptNet (note that, in this comparison, non-commonsense knowledge in ConceptNet such as lexical relations is excluded).\n\nFor more details, take a look at\nthe research paper and\nthe website.", "### Supported Tasks and Leaderboards\n\nThe dataset can be used in a wide range of downstream tasks such as commonsense question answering or dialogue systems.", "### Languages\n\nThe dataset is in English.", "## Dataset Structure", "### Data Instances\nThere are two configurations available for this dataset:\n1. 'canonical' (default): This part contains '<arg1 ; rel ; arg2>'\n assertions where the relations ('rel') were mapped to \n ConceptNet relations\n with slight modifications:\n - Introducing 2 new relations: '/r/HasSubgroup', '/r/HasAspect'.\n - All '/r/HasA' relations were replaced with '/r/HasAspect'. \n This is motivated by the ATOMIC-2020\n schema, although they grouped all '/r/HasA' and\n '/r/HasProperty' into '/r/HasProperty'.\n - The '/r/UsedFor' relation was replaced with '/r/ObjectUse'\n which is broader (could be either _\"used for\"_, _\"used in\"_, or _\"used as\"_, ect.).\n This is also taken from ATOMIC-2020.\n2. 'open': This part contains open assertions of the form\n '<subject ; predicate ; object>' extracted directly from web\n contents. This is the original form of the 'canonical' triples. 
\n\nIn both configurations, each assertion is equipped with \nextra information including: a set of semantic 'facets'\n(e.g., *LOCATION*, *TEMPORAL*, etc.), its 'support' (i.e., number of occurrences),\nand a list of 'source_sentences'.\n\nAn example row in the 'canonical' configuration:", "### Data Fields\n\n- For 'canonical' configuration\n - 'arg1': the first argument to the relationship, e.g., *elephant*\n - 'rel': the canonical relation, e.g., */r/HasProperty*\n - 'arg2': the second argument to the relationship, e.g., *intelligence*\n - 'support': the number of occurrences of the assertion, e.g., *15*\n - 'facets': an array of semantic facets, each contains\n - 'value': facet value, e.g., *extremely*\n - 'type': facet type, e.g., *DEGREE*\n - 'support': the number of occurrences of the facet, e.g., *11*\n - 'source_sentences': an array of source sentences from which the assertion was\n extracted, each contains\n - 'text': the raw text of the sentence\n - 'source': the URL to its parent document\n\n- For 'open' configuration\n - The fields of this configuration are the same as the 'canonical'\n configuration's, except that\n the ('arg1', 'rel', 'arg2') fields are replaced with the\n ('subject', 'predicate', 'object') fields\n which are free\n text phrases extracted directly from the source sentences\n using an Open Information Extraction (OpenIE) tool.", "### Data Splits\n\nThere are no splits. All data points belong to a default split called 'train'.", "## Dataset Creation", "### Curation Rationale\n\nThe commonsense knowledge base was created to assist in the development of robust and reliable AI.", "### Source Data", "#### Initial Data Collection and Normalization\n\nTexts were collected from the web using the Bing Search API, and went through various cleaning steps before being processed by an OpenIE tool to get open assertions.\nThe assertions were then grouped into semantically equivalent clusters.\nTake a look at the research paper for more details.", "#### Who are the source language producers?\n\nWeb users.", "### Annotations", "#### Annotation process\n\nNone.", "#### Who are the annotators?\n\nNone.", "### Personal and Sensitive Information\n\nUnknown.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe knowledge base has been developed by researchers at the\nMax Planck Institute for Informatics.\n\nContact Tuan-Phong Nguyen in case of questions and comments.", "### Licensing Information\n\nThe Creative Commons Attribution 4.0 International License", "### Contributions\n\nThanks to @phongnt570 for adding this dataset." ]
[ 86, 8, 120, 24, 129, 36, 11, 6, 374, 333, 25, 5, 24, 4, 73, 13, 5, 8, 12, 12, 8, 7, 8, 7, 5, 38, 13, 19 ]
[ "passage: TAGS\n#task_categories-other #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #knowledge-base #arxiv-2011.00905 #region-us \n# Dataset Card for Ascent KB## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL### Dataset Summary\n\nThis dataset contains 8.9M commonsense assertions extracted by the Ascent pipeline developed at the Max Planck Institute for Informatics.\nThe focus of this dataset is on everyday concepts such as *elephant*, *car*, *laptop*, etc.\nThe current version of Ascent KB (v1.0.0) is approximately 19 times larger than ConceptNet (note that, in this comparison, non-commonsense knowledge in ConceptNet such as lexical relations is excluded).\n\nFor more details, take a look at\nthe research paper and\nthe website.### Supported Tasks and Leaderboards\n\nThe dataset can be used in a wide range of downstream tasks such as commonsense question answering or dialogue systems.### Languages\n\nThe dataset is in English.## Dataset Structure", "passage: ### Data Instances\nThere are two configurations available for this dataset:\n1. 'canonical' (default): This part contains '<arg1 ; rel ; arg2>'\n assertions where the relations ('rel') were mapped to \n ConceptNet relations\n with slight modifications:\n - Introducing 2 new relations: '/r/HasSubgroup', '/r/HasAspect'.\n - All '/r/HasA' relations were replaced with '/r/HasAspect'. \n This is motivated by the ATOMIC-2020\n schema, although they grouped all '/r/HasA' and\n '/r/HasProperty' into '/r/HasProperty'.\n - The '/r/UsedFor' relation was replaced with '/r/ObjectUse'\n which is broader (could be either _\"used for\"_, _\"used in\"_, or _\"used as\"_, ect.).\n This is also taken from ATOMIC-2020.\n2. 'open': This part contains open assertions of the form\n '<subject ; predicate ; object>' extracted directly from web\n contents. This is the original form of the 'canonical' triples. 
\n\nIn both configurations, each assertion is equipped with \nextra information including: a set of semantic 'facets'\n(e.g., *LOCATION*, *TEMPORAL*, etc.), its 'support' (i.e., number of occurrences),\nand a list of 'source_sentences'.\n\nAn example row in the 'canonical' configuration:### Data Fields\n\n- For 'canonical' configuration\n - 'arg1': the first argument to the relationship, e.g., *elephant*\n - 'rel': the canonical relation, e.g., */r/HasProperty*\n - 'arg2': the second argument to the relationship, e.g., *intelligence*\n - 'support': the number of occurrences of the assertion, e.g., *15*\n - 'facets': an array of semantic facets, each contains\n - 'value': facet value, e.g., *extremely*\n - 'type': facet type, e.g., *DEGREE*\n - 'support': the number of occurrences of the facet, e.g., *11*\n - 'source_sentences': an array of source sentences from which the assertion was\n extracted, each contains\n - 'text': the raw text of the sentence\n - 'source': the URL to its parent document\n\n- For 'open' configuration\n - The fields of this configuration are the same as the 'canonical'\n configuration's, except that\n the ('arg1', 'rel', 'arg2') fields are replaced with the\n ('subject', 'predicate', 'object') fields\n which are free\n text phrases extracted directly from the source sentences\n using an Open Information Extraction (OpenIE) tool.### Data Splits\n\nThere are no splits. All data points belong to a default split called 'train'.## Dataset Creation### Curation Rationale\n\nThe commonsense knowledge base was created to assist in the development of robust and reliable AI.### Source Data#### Initial Data Collection and Normalization\n\nTexts were collected from the web using the Bing Search API, and went through various cleaning steps before being processed by an OpenIE tool to get open assertions.\nThe assertions were then grouped into semantically equivalent clusters.\nTake a look at the research paper for more details.#### Who are the source language producers?\n\nWeb users.### Annotations#### Annotation process\n\nNone.#### Who are the annotators?\n\nNone." ]
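The field layout documented above maps directly onto the `datasets` loading API. The sketch below is illustrative only: the Hub id `ascent_kb` is an assumption (this record excerpt does not state it), while the `canonical` config, the `train` split, and the column names come from the card itself.

```python
from datasets import load_dataset

# Minimal sketch, assuming the knowledge base is published under the Hub id
# "ascent_kb" (not confirmed in this excerpt). With ~8.9M assertions,
# streaming avoids materializing the full 'train' split on disk.
kb = load_dataset("ascent_kb", "canonical", split="train", streaming=True)

for assertion in kb.take(3):
    # Each row carries the canonical triple plus its support count;
    # 'facets' and 'source_sentences' hold the extra provenance described above.
    print(assertion["arg1"], "|", assertion["rel"], "|", assertion["arg2"],
          "| support:", assertion["support"])
```

The `open` config would be read the same way, with the `subject`, `predicate`, and `object` fields in place of the canonical triple.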
cb7cd272db8fcd4004ee04ddf50e194c15ea24d6
# Dataset Card for "aslg_pc12" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://achrafothman.net/site/asl-smt/](https://achrafothman.net/site/asl-smt/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 12.77 MB - **Size of the generated dataset:** 13.50 MB - **Total amount of disk used:** 26.27 MB ### Dataset Summary Synthetic English-ASL Gloss Parallel Corpus 2012 ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 12.77 MB - **Size of the generated dataset:** 13.50 MB - **Total amount of disk used:** 26.27 MB An example of 'train' looks as follows. ``` { "gloss": "WRITE STATEMENT AND DESC-ORAL QUESTION TABLE SEE MINUTE\n", "text": "written statements and oral questions tabling see minutes\n" } ``` ### Data Fields The data fields are the same among all splits. #### default - `gloss`: a `string` feature. - `text`: a `string` feature. ### Data Splits | name |train| |-------|----:| |default|87710| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@inproceedings{othman2012english,
  title={English-asl gloss parallel corpus 2012: Aslg-pc12},
  author={Othman, Achraf and Jemni, Mohamed},
  booktitle={5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon LREC},
  year={2012}
}
```

### Contributions

Thanks to [@AmitMY](https://github.com/AmitMY) for adding this dataset.
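Since the card documents a single 'train' split with two string fields, a minimal loading sketch looks as follows. The Hub id `aslg_pc12` and the field names come from this record; everything else is standard `datasets` usage.

```python
from datasets import load_dataset

# Load the single 'train' split (87,710 gloss/text pairs per the card).
ds = load_dataset("aslg_pc12", split="train")

pair = ds[0]
# Both fields end with "\n" in the raw data (see the example above),
# so strip before printing.
print(pair["gloss"].strip())  # ASL gloss, e.g. "WRITE STATEMENT AND ..."
print(pair["text"].strip())   # the corresponding English sentence
```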
aslg_pc12
[ "task_categories:translation", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:ase", "language:en", "license:cc-by-nc-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["found"], "language": ["ase", "en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["translation"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "paperswithcode_id": "aslg-pc12", "pretty_name": "English-ASL Gloss Parallel Corpus 2012", "dataset_info": {"features": [{"name": "gloss", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13475111, "num_examples": 87710}], "download_size": 7583458, "dataset_size": 13475111}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-09T12:45:54+00:00
[]
[ "ase", "en" ]
TAGS #task_categories-translation #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-found #multilinguality-translation #size_categories-10K<n<100K #source_datasets-original #language-American Sign Language #language-English #license-cc-by-nc-4.0 #region-us
Dataset Card for "aslg\_pc12" ============================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 12.77 MB * Size of the generated dataset: 13.50 MB * Total amount of disk used: 26.27 MB ### Dataset Summary Synthetic English-ASL Gloss Parallel Corpus 2012 ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 12.77 MB * Size of the generated dataset: 13.50 MB * Total amount of disk used: 26.27 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'gloss': a 'string' feature. * 'text': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @AmitMY for adding this dataset.
[ "### Dataset Summary\n\n\nSynthetic English-ASL Gloss Parallel Corpus 2012", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 12.77 MB\n* Size of the generated dataset: 13.50 MB\n* Total amount of disk used: 26.27 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'gloss': a 'string' feature.\n* 'text': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @AmitMY for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-found #multilinguality-translation #size_categories-10K<n<100K #source_datasets-original #language-American Sign Language #language-English #license-cc-by-nc-4.0 #region-us \n", "### Dataset Summary\n\n\nSynthetic English-ASL Gloss Parallel Corpus 2012", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 12.77 MB\n* Size of the generated dataset: 13.50 MB\n* Total amount of disk used: 26.27 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'gloss': a 'string' feature.\n* 'text': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @AmitMY for adding this dataset." ]
[ 97, 19, 10, 11, 6, 49, 17, 27, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 17 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-found #multilinguality-translation #size_categories-10K<n<100K #source_datasets-original #language-American Sign Language #language-English #license-cc-by-nc-4.0 #region-us \n### Dataset Summary\n\n\nSynthetic English-ASL Gloss Parallel Corpus 2012### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 12.77 MB\n* Size of the generated dataset: 13.50 MB\n* Total amount of disk used: 26.27 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'gloss': a 'string' feature.\n* 'text': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information### Contributions\n\n\nThanks to @AmitMY for adding this dataset." ]
32291fc9663b9ee88abb97114e52501bdd58a129
# Dataset Card for "asnq" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/alexa/wqa_tanda#answer-sentence-natural-questions-asnq](https://github.com/alexa/wqa_tanda#answer-sentence-natural-questions-asnq) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection](https://arxiv.org/abs/1911.04118) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 3.56 GB - **Size of the generated dataset:** 3.82 GB - **Total amount of disk used:** 7.39 GB ### Dataset Summary ASNQ is a dataset for answer sentence selection derived from Google's Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). Each example contains a question, candidate sentence, label indicating whether or not the sentence answers the question, and two additional features -- sentence_in_long_answer and short_answer_in_sentence indicating whether ot not the candidate sentence is contained in the long_answer and if the short_answer is in the candidate sentence. For more details please see https://arxiv.org/abs/1911.04118 and https://research.google/pubs/pub47761/ ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 3.56 GB - **Size of the generated dataset:** 3.82 GB - **Total amount of disk used:** 7.39 GB An example of 'validation' looks as follows. ``` { "label": 0, "question": "when did somewhere over the rainbow come out", "sentence": "In films and TV shows ( edit ) In the film Third Finger , Left Hand ( 1940 ) with Myrna Loy , Melvyn Douglas , and Raymond Walburn , the tune played throughout the film in short sequences .", "sentence_in_long_answer": false, "short_answer_in_sentence": false } ``` ### Data Fields The data fields are the same among all splits. #### default - `question`: a `string` feature. - `sentence`: a `string` feature. - `label`: a classification label, with possible values including `neg` (0), `pos` (1). 
- `sentence_in_long_answer`: a `bool` feature.
- `short_answer_in_sentence`: a `bool` feature.

### Data Splits

| name    |    train | validation |
|---------|---------:|-----------:|
| default | 20377568 |     930062 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

The data is made available under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License: https://github.com/alexa/wqa_tanda/blob/master/LICENSE

### Citation Information

```
@article{Garg_2020,
   title={TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection},
   volume={34},
   ISSN={2159-5399},
   url={http://dx.doi.org/10.1609/AAAI.V34I05.6282},
   DOI={10.1609/aaai.v34i05.6282},
   number={05},
   journal={Proceedings of the AAAI Conference on Artificial Intelligence},
   publisher={Association for the Advancement of Artificial Intelligence (AAAI)},
   author={Garg, Siddhant and Vu, Thuy and Moschitti, Alessandro},
   year={2020},
   month={Apr},
   pages={7780–7788}
}
```

### Contributions

Thanks to [@mkserge](https://github.com/mkserge) for adding this dataset.
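Given the `label` ClassLabel described above (`neg` = 0, `pos` = 1), a small sketch of inspecting the class balance on the validation split might look like this. The id `asnq` and the split names come from this record; the rest is standard `datasets` usage.

```python
from datasets import load_dataset

# The ~930K-row validation split loads comfortably in memory; for the
# ~20.4M-row train split, prefer load_dataset(..., streaming=True).
val = load_dataset("asnq", split="validation")

# Keep only candidate sentences labeled as answering their question.
positives = val.filter(lambda ex: ex["label"] == 1)  # 1 == "pos"
print(f"{len(positives)} of {len(val)} candidate sentences answer their question")
```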
asnq
[ "task_categories:multiple-choice", "task_ids:multiple-choice-qa", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:extended|natural_questions", "language:en", "license:cc-by-nc-sa-3.0", "arxiv:1911.04118", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["extended|natural_questions"], "task_categories": ["multiple-choice"], "task_ids": ["multiple-choice-qa"], "paperswithcode_id": "asnq", "pretty_name": "Answer Sentence Natural Questions (ASNQ)", "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}, {"name": "sentence_in_long_answer", "dtype": "bool"}, {"name": "short_answer_in_sentence", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 3656865072, "num_examples": 20377568}, {"name": "validation", "num_bytes": 168004403, "num_examples": 930062}], "download_size": 2496835395, "dataset_size": 3824869475}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-09T15:33:53+00:00
[ "1911.04118" ]
[ "en" ]
TAGS #task_categories-multiple-choice #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-extended|natural_questions #language-English #license-cc-by-nc-sa-3.0 #arxiv-1911.04118 #region-us
Dataset Card for "asnq" ======================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection * Point of Contact: * Size of downloaded dataset files: 3.56 GB * Size of the generated dataset: 3.82 GB * Total amount of disk used: 7.39 GB ### Dataset Summary ASNQ is a dataset for answer sentence selection derived from Google's Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). Each example contains a question, candidate sentence, label indicating whether or not the sentence answers the question, and two additional features -- sentence\_in\_long\_answer and short\_answer\_in\_sentence indicating whether ot not the candidate sentence is contained in the long\_answer and if the short\_answer is in the candidate sentence. For more details please see URL and URL ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 3.56 GB * Size of the generated dataset: 3.82 GB * Total amount of disk used: 7.39 GB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'question': a 'string' feature. * 'sentence': a 'string' feature. * 'label': a classification label, with possible values including 'neg' (0), 'pos' (1). * 'sentence\_in\_long\_answer': a 'bool' feature. * 'short\_answer\_in\_sentence': a 'bool' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The data is made available under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License: URL ### Contributions Thanks to @mkserge for adding this dataset.
[ "### Dataset Summary\n\n\nASNQ is a dataset for answer sentence selection derived from\nGoogle's Natural Questions (NQ) dataset (Kwiatkowski et al. 2019).\n\n\nEach example contains a question, candidate sentence, label indicating whether or not\nthe sentence answers the question, and two additional features --\nsentence\\_in\\_long\\_answer and short\\_answer\\_in\\_sentence indicating whether ot not the\ncandidate sentence is contained in the long\\_answer and if the short\\_answer is in the candidate sentence.\n\n\nFor more details please see\nURL\n\n\nand\n\n\nURL", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 3.56 GB\n* Size of the generated dataset: 3.82 GB\n* Total amount of disk used: 7.39 GB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'question': a 'string' feature.\n* 'sentence': a 'string' feature.\n* 'label': a classification label, with possible values including 'neg' (0), 'pos' (1).\n* 'sentence\\_in\\_long\\_answer': a 'bool' feature.\n* 'short\\_answer\\_in\\_sentence': a 'bool' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe data is made available under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License:\nURL", "### Contributions\n\n\nThanks to @mkserge for adding this dataset." ]
[ "TAGS\n#task_categories-multiple-choice #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-extended|natural_questions #language-English #license-cc-by-nc-sa-3.0 #arxiv-1911.04118 #region-us \n", "### Dataset Summary\n\n\nASNQ is a dataset for answer sentence selection derived from\nGoogle's Natural Questions (NQ) dataset (Kwiatkowski et al. 2019).\n\n\nEach example contains a question, candidate sentence, label indicating whether or not\nthe sentence answers the question, and two additional features --\nsentence\\_in\\_long\\_answer and short\\_answer\\_in\\_sentence indicating whether ot not the\ncandidate sentence is contained in the long\\_answer and if the short\\_answer is in the candidate sentence.\n\n\nFor more details please see\nURL\n\n\nand\n\n\nURL", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 3.56 GB\n* Size of the generated dataset: 3.82 GB\n* Total amount of disk used: 7.39 GB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'question': a 'string' feature.\n* 'sentence': a 'string' feature.\n* 'label': a classification label, with possible values including 'neg' (0), 'pos' (1).\n* 'sentence\\_in\\_long\\_answer': a 'bool' feature.\n* 'short\\_answer\\_in\\_sentence': a 'bool' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe data is made available under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License:\nURL", "### Contributions\n\n\nThanks to @mkserge for adding this dataset." ]
[ 113, 138, 10, 11, 6, 50, 17, 96, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 28, 17 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_ids-multiple-choice-qa #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-extended|natural_questions #language-English #license-cc-by-nc-sa-3.0 #arxiv-1911.04118 #region-us \n### Dataset Summary\n\n\nASNQ is a dataset for answer sentence selection derived from\nGoogle's Natural Questions (NQ) dataset (Kwiatkowski et al. 2019).\n\n\nEach example contains a question, candidate sentence, label indicating whether or not\nthe sentence answers the question, and two additional features --\nsentence\\_in\\_long\\_answer and short\\_answer\\_in\\_sentence indicating whether ot not the\ncandidate sentence is contained in the long\\_answer and if the short\\_answer is in the candidate sentence.\n\n\nFor more details please see\nURL\n\n\nand\n\n\nURL### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 3.56 GB\n* Size of the generated dataset: 3.82 GB\n* Total amount of disk used: 7.39 GB\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'question': a 'string' feature.\n* 'sentence': a 'string' feature.\n* 'label': a classification label, with possible values including 'neg' (0), 'pos' (1).\n* 'sentence\\_in\\_long\\_answer': a 'bool' feature.\n* 'short\\_answer\\_in\\_sentence': a 'bool' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?" ]
c7f2fa4bae55ae656091805d4416c1374582bb4e
# Dataset Card for ASSET

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** [ASSET Github repository](https://github.com/facebookresearch/asset)
- **Paper:** [ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations](https://www.aclweb.org/anthology/2020.acl-main.424/)
- **Point of Contact:** [Louis Martin](louismartincs@gmail.com)

### Dataset Summary

[ASSET](https://github.com/facebookresearch/asset) [(Alva-Manchego et al., 2020)](https://www.aclweb.org/anthology/2020.acl-main.424.pdf) is a multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences from [TurkCorpus](https://github.com/cocoxu/simplification/) [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf) and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence splitting in [HSplit](https://www.aclweb.org/anthology/D18-1081.pdf)), the simplifications in ASSET encompass a variety of rewriting transformations.

### Supported Tasks and Leaderboards

The dataset supports the evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).

### Languages

The text in this dataset is in English (`en`).

## Dataset Structure

### Data Instances

- `simplification` configuration: an instance consists of an original sentence and 10 possible reference simplifications.
- `ratings` configuration: a data instance consists of an original sentence, a simplification obtained by an automated system, and a judgment of quality along one of three axes by a crowd worker.

### Data Fields

- `original`: an original sentence from the source datasets
- `simplifications`: in the `simplification` config, a set of reference simplifications produced by crowd workers.
- `simplification`: in the `ratings` config, a simplification of the original obtained by an automated system
- `aspect`: in the `ratings` config, the aspect on which the simplification is evaluated, one of `meaning`, `fluency`, `simplicity`
- `rating`: a quality rating between 0 and 100

### Data Splits

ASSET does not contain a training set; many models use [WikiLarge](https://github.com/XingxingZhang/dress) (Zhang and Lapata, 2017) for training.

Each input sentence has 10 associated reference simplified sentences. The statistics of ASSET are given below.

|                           | Dev   | Test | Total |
| ------------------------- | ----- | ---- | ----- |
| Input Sentences           | 2000  | 359  | 2359  |
| Reference Simplifications | 20000 | 3590 | 23590 |

The test and validation sets are the same as those of TurkCorpus. The split was random.

There are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the reference sentences do not involve sentence splitting.

## Dataset Creation

### Curation Rationale

ASSET was created in order to improve the evaluation of sentence simplification. It uses the same input sentences as the [TurkCorpus](https://github.com/cocoxu/simplification/) dataset from [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). The 2,359 input sentences of TurkCorpus are a sample of "standard" (not simple) sentences from the [Parallel Wikipedia Simplification (PWKP)](https://www.informatik.tu-darmstadt.de/ukp/research_6/data/sentence_simplification/simple_complex_sentence_pairs/index.en.jsp) dataset [(Zhu et al., 2010)](https://www.aclweb.org/anthology/C10-1152.pdf), which come from the August 22, 2009 version of Wikipedia. The sentences of TurkCorpus were chosen to be of similar length [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). No further information is provided on the sampling strategy.

The TurkCorpus dataset was developed in order to overcome some of the problems with sentence pairs from Standard and Simple Wikipedia: a large fraction of sentences were misaligned, or not actually simpler [(Xu et al., 2016)](https://www.aclweb.org/anthology/Q16-1029.pdf). However, TurkCorpus mainly focused on *lexical paraphrasing*, and so cannot be used to evaluate simplifications involving *compression* (deletion) or *sentence splitting*. HSplit [(Sulem et al., 2018)](https://www.aclweb.org/anthology/D18-1081.pdf), on the other hand, can only be used to evaluate sentence splitting. The reference sentences in ASSET include a wider variety of sentence rewriting strategies, combining splitting, compression and paraphrasing. Annotators were given examples of each kind of transformation individually, as well as all three transformations used at once, but were allowed to decide which transformations to use for any given sentence.

An example illustrating the differences between TurkCorpus, HSplit and ASSET is given below:

> **Original:** He settled in London, devoting himself chiefly to practical teaching.
>
> **TurkCorpus:** He rooted in London, devoting himself mainly to practical teaching.
>
> **HSplit:** He settled in London. He devoted himself chiefly to practical teaching.
>
> **ASSET:** He lived in London. He was a teacher.

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

The input sentences are from English Wikipedia (August 22, 2009 version). No demographic information is available for the writers of these sentences.
However, most Wikipedia editors are male (Lam, 2011; Graells-Garrido, 2015), which has an impact on the topics covered (see also [the Wikipedia page on Wikipedia gender bias](https://en.wikipedia.org/wiki/Gender_bias_on_Wikipedia)). In addition, Wikipedia editors are mostly white, young, and from the Northern Hemisphere [(Wikipedia: Systemic bias)](https://en.wikipedia.org/wiki/Wikipedia:Systemic_bias).

Reference sentences were written by 42 workers on Amazon Mechanical Turk (AMT). The requirements for being an annotator were:

- Passing a Qualification Test (appropriately simplifying sentences). Out of 100 workers, 42 passed the test.
- Being a resident of the United States, United Kingdom or Canada.
- Having a HIT approval rate over 95%, and over 1000 HITs approved.

No other demographic or compensation information is provided in the ASSET paper.

### Annotations

#### Annotation process

The instructions given to the annotators are available [here](https://github.com/facebookresearch/asset/blob/master/crowdsourcing/AMT_AnnotationInstructions.pdf).

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases (Schmahl et al., 2020) and racial biases (Adams et al., 2019).

> Adams, Julia, Hannah Brückner, and Cambria Naslund. "Who Counts as a Notable Sociologist on Wikipedia? Gender, Race, and the “Professor Test”." Socius 5 (2019): 2378023118823946.

> Schmahl, Katja Geertruida, et al. "Is Wikipedia succeeding in reducing gender bias? Assessing changes in gender bias in Wikipedia using word embeddings." Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science. 2020.

### Other Known Limitations

Dataset provided for research purposes only. Please check dataset license for additional information.

## Additional Information

### Dataset Curators

ASSET was developed by researchers at the University of Sheffield, Inria, Facebook AI Research, and Imperial College London. The work was partly supported by Benoît Sagot's chair in the PRAIRIE institute, funded by the French National Research Agency (ANR) as part of the "Investissements d’avenir" program (reference ANR-19-P3IA-0001).

### Licensing Information

[Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/)

### Citation Information

```
@inproceedings{alva-manchego-etal-2020-asset,
    title = "{ASSET}: {A} Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations",
    author = "Alva-Manchego, Fernando and Martin, Louis and Bordes, Antoine and Scarton, Carolina and Sagot, Beno{\^\i}t and Specia, Lucia",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.424",
    pages = "4668--4679",
}
```

This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r).

### Contributions

Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
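To make the intended evaluation loop concrete, here is a hedged sketch combining the `simplification` config with the SARI metric named above. The dataset id `facebook/asset`, the configs, splits, and field names come from this record; the `evaluate` library and its `sari` metric id are assumptions about tooling, and the "system output" below is just a stand-in reference, not a real model prediction.

```python
from datasets import load_dataset
import evaluate  # assumed tooling; the card only names the SARI metric

asset = load_dataset("facebook/asset", "simplification")
example = asset["validation"][0]

# Pretend the first crowdsourced reference is a system output and score it
# against the remaining nine references, as multi-reference SARI intends.
system_output = example["simplifications"][0]
sari = evaluate.load("sari")
score = sari.compute(
    sources=[example["original"]],
    predictions=[system_output],
    references=[example["simplifications"][1:]],
)
print(score)  # e.g. {'sari': ...}
```

In a real evaluation, `predictions` would hold the model's simplifications for every validation sentence and all 10 references would be kept.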
facebook/asset
[ "task_categories:text-classification", "task_categories:text2text-generation", "task_ids:text-simplification", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "source_datasets:extended|other-turkcorpus", "language:en", "license:cc-by-sa-4.0", "simplification-evaluation", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original", "extended|other-turkcorpus"], "task_categories": ["text-classification", "text2text-generation"], "task_ids": ["text-simplification"], "paperswithcode_id": "asset", "pretty_name": "ASSET", "config_names": ["ratings", "simplification"], "tags": ["simplification-evaluation"], "dataset_info": [{"config_name": "ratings", "features": [{"name": "original", "dtype": "string"}, {"name": "simplification", "dtype": "string"}, {"name": "original_sentence_id", "dtype": "int32"}, {"name": "aspect", "dtype": {"class_label": {"names": {"0": "meaning", "1": "fluency", "2": "simplicity"}}}}, {"name": "worker_id", "dtype": "int32"}, {"name": "rating", "dtype": "int32"}], "splits": [{"name": "full", "num_bytes": 1036845, "num_examples": 4500}], "download_size": 44642, "dataset_size": 1036845}, {"config_name": "simplification", "features": [{"name": "original", "dtype": "string"}, {"name": "simplifications", "sequence": "string"}], "splits": [{"name": "validation", "num_bytes": 2303484, "num_examples": 2000}, {"name": "test", "num_bytes": 411019, "num_examples": 359}], "download_size": 1055163, "dataset_size": 2714503}], "configs": [{"config_name": "ratings", "data_files": [{"split": "full", "path": "ratings/full-*"}]}, {"config_name": "simplification", "data_files": [{"split": "validation", "path": "simplification/validation-*"}, {"split": "test", "path": "simplification/test-*"}], "default": true}]}
2023-12-21T15:41:23+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-text2text-generation #task_ids-text-simplification #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #source_datasets-extended|other-turkcorpus #language-English #license-cc-by-sa-4.0 #simplification-evaluation #region-us
Dataset Card for ASSET ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Repository: ASSET Github repository * Paper: ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations * Point of Contact: Louis Martin ### Dataset Summary ASSET (Alva-Manchego et al., 2020) is a multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences from TurkCorpus (Xu et al., 2016) and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence splitting in HSplit), the simplifications in ASSET encompass a variety of rewriting transformations. ### Supported Tasks and Leaderboards The dataset supports the evaluation of 'text-simplification' systems. Success in this task is typically measured using the SARI and FKBLEU metrics described in the paper Optimizing Statistical Machine Translation for Text Simplification. ### Languages The text in this dataset is in English ('en'). Dataset Structure ----------------- ### Data Instances * 'simplification' configuration: an instance consists of an original sentence and 10 possible reference simplifications. * 'ratings' configuration: a data instance consists of an original sentence, a simplification obtained by an automated system, and a judgment of quality along one of three axes by a crowd worker. ### Data Fields * 'original': an original sentence from the source datasets * 'simplifications': in the 'simplification' config, a set of reference simplifications produced by crowd workers. * 'simplification': in the 'ratings' config, a simplification of the original obtained by an automated system * 'aspect': in the 'ratings' config, the aspect on which the simplification is evaluated, one of 'meaning', 'fluency', 'simplicity' * 'rating': a quality rating between 0 and 100 ### Data Splits ASSET does not contain a training set; many models use WikiLarge (Zhang and Lapata, 2017) for training. Each input sentence has 10 associated reference simplified sentences. The statistics of ASSET are given below. The test and validation sets are the same as those of TurkCorpus. The split was random. There are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the reference sentences do not involve sentence splitting. Dataset Creation ---------------- ### Curation Rationale ASSET was created in order to improve the evaluation of sentence simplification. It uses the same input sentences as the TurkCorpus dataset from (Xu et al., 2016). The 2,359 input sentences of TurkCorpus are a sample of "standard" (not simple) sentences from the Parallel Wikipedia Simplification (PWKP) dataset (Zhu et al., 2010), which come from the August 22, 2009 version of Wikipedia. The sentences of TurkCorpus were chosen to be of similar length (Xu et al., 2016). 
No further information is provided on the sampling strategy. The TurkCorpus dataset was developed in order to overcome some of the problems with sentence pairs from Standard and Simple Wikipedia: a large fraction of sentences were misaligned, or not actually simpler (Xu et al., 2016). However, TurkCorpus mainly focused on *lexical paraphrasing*, and so cannot be used to evaluate simplifications involving *compression* (deletion) or *sentence splitting*. HSplit (Sulem et al., 2018), on the other hand, can only be used to evaluate sentence splitting. The reference sentences in ASSET include a wider variety of sentence rewriting strategies, combining splitting, compression and paraphrasing. Annotators were given examples of each kind of transformation individually, as well as all three transformations used at once, but were allowed to decide which transformations to use for any given sentence. An example illustrating the differences between TurkCorpus, HSplit and ASSET is given below: > > Original: He settled in London, devoting himself chiefly to practical teaching. > > > TurkCorpus: He rooted in London, devoting himself mainly to practical teaching. > > > HSplit: He settled in London. He devoted himself chiefly to practical teaching. > > > ASSET: He lived in London. He was a teacher. > > > ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? The input sentences are from English Wikipedia (August 22, 2009 version). No demographic information is available for the writers of these sentences. However, most Wikipedia editors are male (Lam, 2011; Graells-Garrido, 2015), which has an impact on the topics covered (see also the Wikipedia page on Wikipedia gender bias). In addition, Wikipedia editors are mostly white, young, and from the Northern Hemisphere (Wikipedia: Systemic bias). Reference sentences were written by 42 workers on Amazon Mechanical Turk (AMT). The requirements for being an annotator were: * Passing a Qualification Test (appropriately simplifying sentences). Out of 100 workers, 42 passed the test. * Being a resident of the United States, United Kingdom or Canada. * Having a HIT approval rate over 95%, and over 1000 HITs approved. No other demographic or compensation information is provided in the ASSET paper. ### Annotations #### Annotation process The instructions given to the annotators are available here. #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases The dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases (Schmahl et al., 2020) and racial biases (Adams et al., 2019). > > Adams, Julia, Hannah Brückner, and Cambria Naslund. "Who Counts as a Notable Sociologist on Wikipedia? Gender, Race, and the “Professor Test”." Socius 5 (2019): 2378023118823946. > Schmahl, Katja Geertruida, et al. "Is Wikipedia succeeding in reducing gender bias? Assessing changes in gender bias in Wikipedia using word embeddings." Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science. 2020. > > > ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. 
Additional Information ---------------------- ### Dataset Curators ASSET was developed by researchers at the University of Sheffield, Inria, Facebook AI Research, and Imperial College London. The work was partly supported by Benoît Sagot's chair in the PRAIRIE institute, funded by the French National Research Agency (ANR) as part of the "Investissements d’avenir" program (reference ANR-19-P3IA-0001). ### Licensing Information Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) This dataset card uses material written by Juan Diego Rodriguez. ### Contributions Thanks to @yjernite for adding this dataset.
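Usage sketch (an editorial addition, not part of the original card): the snippet below shows one way the two configurations named above could be loaded with the Hugging Face `datasets` library and scored with the `sari` metric from the `evaluate` library. The hub id `asset`, the split names, and current library behavior are assumptions based on this card, not guarantees.

```python
# Minimal sketch, assuming the Hugging Face `datasets` and `evaluate` libraries
# and that the hub id "asset" exposes the configs described in this card.
from datasets import load_dataset
import evaluate

simp = load_dataset("asset", "simplification")  # no train split; validation + test
ratings = load_dataset("asset", "ratings")      # per-aspect human quality judgments

example = simp["validation"][0]
print(example["original"])               # the source sentence
print(len(example["simplifications"]))   # 10 crowdsourced references

# SARI compares a system output against both the source and multiple references.
sari = evaluate.load("sari")
result = sari.compute(
    sources=[example["original"]],
    predictions=[example["simplifications"][0]],  # placeholder "system" output
    references=[example["simplifications"]],
)
print(result)  # e.g. {'sari': ...}
```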
[ "### Dataset Summary\n\n\nASSET (Alva-Manchego et al., 2020) is multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences from TurkCorpus (Xu et al., 2016) and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence\nsplitting in HSplit), the simplifications in ASSET encompass a variety of rewriting transformations.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports the evaluation of 'text-simplification' systems. Success in this tasks is typically measured using the SARI and FKBLEU metrics described in the paper Optimizing Statistical Machine Translation for Text Simplification.", "### Languages\n\n\nThe text in this dataset is in English ('en').\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\n* 'simplification' configuration: an instance consists in an original sentence and 10 possible reference simplifications.\n* 'ratings' configuration: a data instance consists in an original sentence, a simplification obtained by an automated system, and a judgment of quality along one of three axes by a crowd worker.", "### Data Fields\n\n\n* 'original': an original sentence from the source datasets\n* 'simplifications': in the 'simplification' config, a set of reference simplifications produced by crowd workers.\n* 'simplification': in the 'ratings' config, a simplification of the original obtained by an automated system\n* 'aspect': in the 'ratings' config, the aspect on which the simplification is evaluated, one of 'meaning', 'fluency', 'simplicity'\n* 'rating': a quality rating between 0 and 100", "### Data Splits\n\n\nASSET does not contain a training set; many models use WikiLarge (Zhang and Lapata, 2017) for training.\n\n\nEach input sentence has 10 associated reference simplified sentences. The statistics of ASSET are given below.\n\n\n\nThe test and validation sets are the same as those of TurkCorpus. The split was random.\n\n\nThere are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the referece sentences do not involve sentence splitting.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nASSET was created in order to improve the evaluation of sentence simplification. It uses the same input sentences as the TurkCorpus dataset from (Xu et al., 2016). The 2,359 input sentences of TurkCorpus are a sample of \"standard\" (not simple) sentences from the Parallel Wikipedia Simplification (PWKP) dataset (Zhu et al., 2010), which come from the August 22, 2009 version of Wikipedia. The sentences of TurkCorpus were chosen to be of similar length (Xu et al., 2016). No further information is provided on the sampling strategy.\n\n\nThe TurkCorpus dataset was developed in order to overcome some of the problems with sentence pairs from Standard and Simple Wikipedia: a large fraction of sentences were misaligned, or not actually simpler (Xu et al., 2016). However, TurkCorpus mainly focused on *lexical paraphrasing*, and so cannot be used to evaluate simplifications involving *compression* (deletion) or *sentence splitting*. HSplit (Sulem et al., 2018), on the other hand, can only be used to evaluate sentence splitting. The reference sentences in ASSET include a wider variety of sentence rewriting strategies, combining splitting, compression and paraphrasing. 
Annotators were given examples of each kind of transformation individually, as well as all three transformations used at once, but were allowed to decide which transformations to use for any given sentence.\n\n\nAn example illustrating the differences between TurkCorpus, HSplit and ASSET is given below:\n\n\n\n> \n> Original: He settled in London, devoting himself chiefly to practical teaching.\n> \n> \n> TurkCorpus: He rooted in London, devoting himself mainly to practical teaching.\n> \n> \n> HSplit: He settled in London. He devoted himself chiefly to practical teaching.\n> \n> \n> ASSET: He lived in London. He was a teacher.\n> \n> \n>", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nThe input sentences are from English Wikipedia (August 22, 2009 version). No demographic information is available for the writers of these sentences. However, most Wikipedia editors are male (Lam, 2011; Graells-Garrido, 2015), which has an impact on the topics covered (see also the Wikipedia page on Wikipedia gender bias). In addition, Wikipedia editors are mostly white, young, and from the Northern Hemisphere (Wikipedia: Systemic bias).\n\n\nReference sentences were written by 42 workers on Amazon Mechanical Turk (AMT). The requirements for being an annotator were:\n\n\n* Passing a Qualification Test (appropriately simplifying sentences). Out of 100 workers, 42 passed the test.\n* Being a resident of the United States, United Kingdom or Canada.\n* Having a HIT approval rate over 95%, and over 1000 HITs approved.\n\n\nNo other demographic or compensation information is provided in the ASSET paper.", "### Annotations", "#### Annotation process\n\n\nThe instructions given to the annotators are available here.", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nThe dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases (Schmahl et al., 2020) and racial biases (Adams et al., 2019).\n\n\n\n> \n> Adams, Julia, Hannah Brückner, and Cambria Naslund. \"Who Counts as a Notable Sociologist on Wikipedia? Gender, Race, and the “Professor Test”.\" Socius 5 (2019): 2378023118823946.\n> Schmahl, Katja Geertruida, et al. \"Is Wikipedia succeeding in reducing gender bias? Assessing changes in gender bias in Wikipedia using word embeddings.\" Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science. 2020.\n> \n> \n>", "### Other Known Limitations\n\n\nDataset provided for research purposes only. Please check dataset license for additional information.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nASSET was developed by researchers at the University of Sheffield, Inria,\nFacebook AI Research, and Imperial College London. The work was partly supported by Benoît Sagot's chair in the PRAIRIE institute, funded by the French National Research Agency (ANR) as part of the \"Investissements d’avenir\" program (reference ANR-19-P3IA-0001).", "### Licensing Information\n\n\nAttribution-NonCommercial 4.0 International (CC BY-NC 4.0)\n\n\nThis dataset card uses material written by Juan Diego Rodriguez.", "### Contributions\n\n\nThanks to @yjernite for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_categories-text2text-generation #task_ids-text-simplification #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #source_datasets-extended|other-turkcorpus #language-English #license-cc-by-sa-4.0 #simplification-evaluation #region-us \n", "### Dataset Summary\n\n\nASSET (Alva-Manchego et al., 2020) is multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences from TurkCorpus (Xu et al., 2016) and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence\nsplitting in HSplit), the simplifications in ASSET encompass a variety of rewriting transformations.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports the evaluation of 'text-simplification' systems. Success in this tasks is typically measured using the SARI and FKBLEU metrics described in the paper Optimizing Statistical Machine Translation for Text Simplification.", "### Languages\n\n\nThe text in this dataset is in English ('en').\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\n* 'simplification' configuration: an instance consists in an original sentence and 10 possible reference simplifications.\n* 'ratings' configuration: a data instance consists in an original sentence, a simplification obtained by an automated system, and a judgment of quality along one of three axes by a crowd worker.", "### Data Fields\n\n\n* 'original': an original sentence from the source datasets\n* 'simplifications': in the 'simplification' config, a set of reference simplifications produced by crowd workers.\n* 'simplification': in the 'ratings' config, a simplification of the original obtained by an automated system\n* 'aspect': in the 'ratings' config, the aspect on which the simplification is evaluated, one of 'meaning', 'fluency', 'simplicity'\n* 'rating': a quality rating between 0 and 100", "### Data Splits\n\n\nASSET does not contain a training set; many models use WikiLarge (Zhang and Lapata, 2017) for training.\n\n\nEach input sentence has 10 associated reference simplified sentences. The statistics of ASSET are given below.\n\n\n\nThe test and validation sets are the same as those of TurkCorpus. The split was random.\n\n\nThere are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the referece sentences do not involve sentence splitting.\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nASSET was created in order to improve the evaluation of sentence simplification. It uses the same input sentences as the TurkCorpus dataset from (Xu et al., 2016). The 2,359 input sentences of TurkCorpus are a sample of \"standard\" (not simple) sentences from the Parallel Wikipedia Simplification (PWKP) dataset (Zhu et al., 2010), which come from the August 22, 2009 version of Wikipedia. The sentences of TurkCorpus were chosen to be of similar length (Xu et al., 2016). No further information is provided on the sampling strategy.\n\n\nThe TurkCorpus dataset was developed in order to overcome some of the problems with sentence pairs from Standard and Simple Wikipedia: a large fraction of sentences were misaligned, or not actually simpler (Xu et al., 2016). 
However, TurkCorpus mainly focused on *lexical paraphrasing*, and so cannot be used to evaluate simplifications involving *compression* (deletion) or *sentence splitting*. HSplit (Sulem et al., 2018), on the other hand, can only be used to evaluate sentence splitting. The reference sentences in ASSET include a wider variety of sentence rewriting strategies, combining splitting, compression and paraphrasing. Annotators were given examples of each kind of transformation individually, as well as all three transformations used at once, but were allowed to decide which transformations to use for any given sentence.\n\n\nAn example illustrating the differences between TurkCorpus, HSplit and ASSET is given below:\n\n\n\n> \n> Original: He settled in London, devoting himself chiefly to practical teaching.\n> \n> \n> TurkCorpus: He rooted in London, devoting himself mainly to practical teaching.\n> \n> \n> HSplit: He settled in London. He devoted himself chiefly to practical teaching.\n> \n> \n> ASSET: He lived in London. He was a teacher.\n> \n> \n>", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nThe input sentences are from English Wikipedia (August 22, 2009 version). No demographic information is available for the writers of these sentences. However, most Wikipedia editors are male (Lam, 2011; Graells-Garrido, 2015), which has an impact on the topics covered (see also the Wikipedia page on Wikipedia gender bias). In addition, Wikipedia editors are mostly white, young, and from the Northern Hemisphere (Wikipedia: Systemic bias).\n\n\nReference sentences were written by 42 workers on Amazon Mechanical Turk (AMT). The requirements for being an annotator were:\n\n\n* Passing a Qualification Test (appropriately simplifying sentences). Out of 100 workers, 42 passed the test.\n* Being a resident of the United States, United Kingdom or Canada.\n* Having a HIT approval rate over 95%, and over 1000 HITs approved.\n\n\nNo other demographic or compensation information is provided in the ASSET paper.", "### Annotations", "#### Annotation process\n\n\nThe instructions given to the annotators are available here.", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nThe dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases (Schmahl et al., 2020) and racial biases (Adams et al., 2019).\n\n\n\n> \n> Adams, Julia, Hannah Brückner, and Cambria Naslund. \"Who Counts as a Notable Sociologist on Wikipedia? Gender, Race, and the “Professor Test”.\" Socius 5 (2019): 2378023118823946.\n> Schmahl, Katja Geertruida, et al. \"Is Wikipedia succeeding in reducing gender bias? Assessing changes in gender bias in Wikipedia using word embeddings.\" Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science. 2020.\n> \n> \n>", "### Other Known Limitations\n\n\nDataset provided for research purposes only. Please check dataset license for additional information.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nASSET was developed by researchers at the University of Sheffield, Inria,\nFacebook AI Research, and Imperial College London. 
The work was partly supported by Benoît Sagot's chair in the PRAIRIE institute, funded by the French National Research Agency (ANR) as part of the \"Investissements d’avenir\" program (reference ANR-19-P3IA-0001).", "### Licensing Information\n\n\nAttribution-NonCommercial 4.0 International (CC BY-NC 4.0)\n\n\nThis dataset card uses material written by Juan Diego Rodriguez.", "### Contributions\n\n\nThanks to @yjernite for adding this dataset." ]
[ 128, 129, 62, 25, 74, 132, 131, 445, 4, 10, 214, 5, 17, 9, 18, 7, 195, 32, 94, 34, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-text2text-generation #task_ids-text-simplification #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #source_datasets-extended|other-turkcorpus #language-English #license-cc-by-sa-4.0 #simplification-evaluation #region-us \n### Dataset Summary\n\n\nASSET (Alva-Manchego et al., 2020) is multi-reference dataset for the evaluation of sentence simplification in English. The dataset uses the same 2,359 sentences from TurkCorpus (Xu et al., 2016) and each sentence is associated with 10 crowdsourced simplifications. Unlike previous simplification datasets, which contain a single transformation (e.g., lexical paraphrasing in TurkCorpus or sentence\nsplitting in HSplit), the simplifications in ASSET encompass a variety of rewriting transformations.### Supported Tasks and Leaderboards\n\n\nThe dataset supports the evaluation of 'text-simplification' systems. Success in this tasks is typically measured using the SARI and FKBLEU metrics described in the paper Optimizing Statistical Machine Translation for Text Simplification.### Languages\n\n\nThe text in this dataset is in English ('en').\n\n\nDataset Structure\n-----------------### Data Instances\n\n\n* 'simplification' configuration: an instance consists in an original sentence and 10 possible reference simplifications.\n* 'ratings' configuration: a data instance consists in an original sentence, a simplification obtained by an automated system, and a judgment of quality along one of three axes by a crowd worker.", "passage: ### Data Fields\n\n\n* 'original': an original sentence from the source datasets\n* 'simplifications': in the 'simplification' config, a set of reference simplifications produced by crowd workers.\n* 'simplification': in the 'ratings' config, a simplification of the original obtained by an automated system\n* 'aspect': in the 'ratings' config, the aspect on which the simplification is evaluated, one of 'meaning', 'fluency', 'simplicity'\n* 'rating': a quality rating between 0 and 100### Data Splits\n\n\nASSET does not contain a training set; many models use WikiLarge (Zhang and Lapata, 2017) for training.\n\n\nEach input sentence has 10 associated reference simplified sentences. The statistics of ASSET are given below.\n\n\n\nThe test and validation sets are the same as those of TurkCorpus. The split was random.\n\n\nThere are 19.04 tokens per reference on average (lower than 21.29 and 25.49 for TurkCorpus and HSplit, respectively). Most (17,245) of the referece sentences do not involve sentence splitting.\n\n\nDataset Creation\n----------------", "passage: ### Curation Rationale\n\n\nASSET was created in order to improve the evaluation of sentence simplification. It uses the same input sentences as the TurkCorpus dataset from (Xu et al., 2016). The 2,359 input sentences of TurkCorpus are a sample of \"standard\" (not simple) sentences from the Parallel Wikipedia Simplification (PWKP) dataset (Zhu et al., 2010), which come from the August 22, 2009 version of Wikipedia. The sentences of TurkCorpus were chosen to be of similar length (Xu et al., 2016). No further information is provided on the sampling strategy.\n\n\nThe TurkCorpus dataset was developed in order to overcome some of the problems with sentence pairs from Standard and Simple Wikipedia: a large fraction of sentences were misaligned, or not actually simpler (Xu et al., 2016). 
However, TurkCorpus mainly focused on *lexical paraphrasing*, and so cannot be used to evaluate simplifications involving *compression* (deletion) or *sentence splitting*. HSplit (Sulem et al., 2018), on the other hand, can only be used to evaluate sentence splitting. The reference sentences in ASSET include a wider variety of sentence rewriting strategies, combining splitting, compression and paraphrasing. Annotators were given examples of each kind of transformation individually, as well as all three transformations used at once, but were allowed to decide which transformations to use for any given sentence.\n\n\nAn example illustrating the differences between TurkCorpus, HSplit and ASSET is given below:\n\n\n\n> \n> Original: He settled in London, devoting himself chiefly to practical teaching.\n> \n> \n> TurkCorpus: He rooted in London, devoting himself mainly to practical teaching.\n> \n> \n> HSplit: He settled in London. He devoted himself chiefly to practical teaching.\n> \n> \n> ASSET: He lived in London. He was a teacher.\n> \n> \n>### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?\n\n\nThe input sentences are from English Wikipedia (August 22, 2009 version). No demographic information is available for the writers of these sentences. However, most Wikipedia editors are male (Lam, 2011; Graells-Garrido, 2015), which has an impact on the topics covered (see also the Wikipedia page on Wikipedia gender bias). In addition, Wikipedia editors are mostly white, young, and from the Northern Hemisphere (Wikipedia: Systemic bias).\n\n\nReference sentences were written by 42 workers on Amazon Mechanical Turk (AMT). The requirements for being an annotator were:\n\n\n* Passing a Qualification Test (appropriately simplifying sentences). Out of 100 workers, 42 passed the test.\n* Being a resident of the United States, United Kingdom or Canada.\n* Having a HIT approval rate over 95%, and over 1000 HITs approved.\n\n\nNo other demographic or compensation information is provided in the ASSET paper.### Annotations#### Annotation process\n\n\nThe instructions given to the annotators are available here.#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases\n\n\nThe dataset may contain some social biases, as the input sentences are based on Wikipedia. Studies have shown that the English Wikipedia contains both gender biases (Schmahl et al., 2020) and racial biases (Adams et al., 2019).\n\n\n\n> \n> Adams, Julia, Hannah Brückner, and Cambria Naslund. \"Who Counts as a Notable Sociologist on Wikipedia? Gender, Race, and the “Professor Test”.\" Socius 5 (2019): 2378023118823946.\n> Schmahl, Katja Geertruida, et al. \"Is Wikipedia succeeding in reducing gender bias? Assessing changes in gender bias in Wikipedia using word embeddings.\" Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science. 2020.\n> \n> \n>" ]
6535e48351178e07ade013b05b69f0e35cb28bbb
# Dataset Card for ASSIN

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [ASSIN homepage](http://nilc.icmc.usp.br/assin/)
- **Repository:** [ASSIN repository](http://nilc.icmc.usp.br/assin/)
- **Paper:** [ASSIN: Evaluation of Semantic Similarity and Textual Inference](http://propor2016.di.fc.ul.pt/wp-content/uploads/2015/10/assin-overview.pdf)
- **Point of Contact:** [Erick Rocha Fonseca](mailto:erickrf@icmc.usp.br)

### Dataset Summary

The ASSIN (Avaliação de Similaridade Semântica e INferência textual) corpus is a corpus annotated with pairs of sentences written in Portuguese that is suitable for the exploration of textual entailment and paraphrasing classifiers. The corpus contains pairs of sentences extracted from news articles written in European Portuguese (EP) and Brazilian Portuguese (BP), obtained from Google News Portugal and Brazil, respectively.

To create the corpus, the authors started by collecting a set of news articles describing the same event (one news article from Google News Portugal and another from Google News Brazil) from Google News. Then, they employed Latent Dirichlet Allocation (LDA) models to retrieve pairs of similar sentences between sets of news articles that were grouped together around the same topic. For that, two LDA models were trained (for EP and for BP) on external and large-scale collections of unannotated news articles from Portuguese and Brazilian news providers, respectively. Then, the authors defined a lower and upper threshold for the sentence similarity score of the retrieved pairs of sentences, taking into account that high similarity scores correspond to sentences that contain almost the same content (paraphrase candidates), and low similarity scores correspond to sentences that are very different in content from each other (no-relation candidates). From the collection of pairs of sentences obtained at this stage, the authors performed some manual grammatical corrections and discarded some of the pairs wrongly retrieved.

Furthermore, from a preliminary analysis of the retrieved sentence pairs, the authors noticed that the number of contradictions retrieved during the previous stage was very low. They also noticed that even though paraphrases are not very frequent, they occur with some frequency in news articles. Consequently, in contrast with the majority of the currently available corpora for other languages, which consider as labels “neutral”, “entailment” and “contradiction” for the task of RTE, the authors of the ASSIN corpus decided to use as labels “none”, “entailment” and “paraphrase”.

Finally, the manual annotation of pairs of sentences was performed by human annotators. At least four annotators were randomly selected to annotate each pair of sentences, which was done in two steps: (i) assigning a semantic similarity label (a score between 1 and 5, from unrelated to very similar); and (ii) providing an entailment label (one sentence entails the other, the sentences are paraphrases, or there is no relation). Sentence pairs where at least three annotators did not agree on the entailment label were considered controversial and thus discarded from the gold standard annotations.

The full dataset has 10,000 sentence pairs, half of which are in Brazilian Portuguese (ptbr) and half in European Portuguese (ptpt). Each language variant has 2,500 pairs for training, 500 for validation and 2,000 for testing.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The language supported is Portuguese.

## Dataset Structure

### Data Instances

An example from the ASSIN dataset looks as follows:

```
{
  "entailment_judgment": 0,
  "hypothesis": "André Gomes entra em campo quatro meses depois de uma lesão na perna esquerda o ter afastado dos relvados.",
  "premise": "Relembre-se que o atleta estava afastado dos relvados desde maio, altura em que contraiu uma lesão na perna esquerda.",
  "relatedness_score": 3.5,
  "sentence_pair_id": 1
}
```

### Data Fields

- `sentence_pair_id`: an `int64` feature.
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `relatedness_score`: a `float32` feature.
- `entailment_judgment`: a classification label, with possible values including `NONE`, `ENTAILMENT`, `PARAPHRASE`.

### Data Splits

The data is split into train, validation and test sets. The split sizes are as follows:

|      | Train | Val  | Test |
| ---- | ----- | ---- | ---- |
| full | 5000  | 1000 | 4000 |
| ptbr | 2500  | 500  | 2000 |
| ptpt | 2500  | 500  | 2000 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{fonseca2016assin,
  title={ASSIN: Avaliacao de similaridade semantica e inferencia textual},
  author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S},
  booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal},
  pages={13--15},
  year={2016}
}
```

### Contributions

Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
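Usage sketch (an editorial addition, not from the original card): the configuration names `full`, `ptbr`, and `ptpt` come from this card; the hub id `assin` and current `datasets` library behavior are assumptions.

```python
# Minimal sketch, assuming the Hugging Face `datasets` library and hub id "assin".
from datasets import load_dataset

assin = load_dataset("assin", "ptbr")  # or "full" / "ptpt", as described above

pair = assin["train"][0]
print(pair["premise"])
print(pair["hypothesis"])
print(pair["relatedness_score"])  # semantic similarity score between 1 and 5

# `entailment_judgment` is a ClassLabel; recover the human-readable name.
names = assin["train"].features["entailment_judgment"].names
print(names[pair["entailment_judgment"]])  # NONE / ENTAILMENT / PARAPHRASE
```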
assin
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pt", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pt"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["text-scoring", "natural-language-inference", "semantic-similarity-scoring"], "paperswithcode_id": "assin", "pretty_name": "ASSIN", "dataset_info": [{"config_name": "full", "features": [{"name": "sentence_pair_id", "dtype": "int64"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "relatedness_score", "dtype": "float32"}, {"name": "entailment_judgment", "dtype": {"class_label": {"names": {"0": "NONE", "1": "ENTAILMENT", "2": "PARAPHRASE"}}}}], "splits": [{"name": "train", "num_bytes": 986499, "num_examples": 5000}, {"name": "test", "num_bytes": 767304, "num_examples": 4000}, {"name": "validation", "num_bytes": 196821, "num_examples": 1000}], "download_size": 1335013, "dataset_size": 1950624}, {"config_name": "ptbr", "features": [{"name": "sentence_pair_id", "dtype": "int64"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "relatedness_score", "dtype": "float32"}, {"name": "entailment_judgment", "dtype": {"class_label": {"names": {"0": "NONE", "1": "ENTAILMENT", "2": "PARAPHRASE"}}}}], "splits": [{"name": "train", "num_bytes": 463505, "num_examples": 2500}, {"name": "test", "num_bytes": 374424, "num_examples": 2000}, {"name": "validation", "num_bytes": 91203, "num_examples": 500}], "download_size": 639490, "dataset_size": 929132}, {"config_name": "ptpt", "features": [{"name": "sentence_pair_id", "dtype": "int64"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "relatedness_score", "dtype": "float32"}, {"name": "entailment_judgment", "dtype": {"class_label": {"names": {"0": "NONE", "1": "ENTAILMENT", "2": "PARAPHRASE"}}}}], "splits": [{"name": "train", "num_bytes": 522994, "num_examples": 2500}, {"name": "test", "num_bytes": 392880, "num_examples": 2000}, {"name": "validation", "num_bytes": 105618, "num_examples": 500}], "download_size": 706661, "dataset_size": 1021492}], "configs": [{"config_name": "full", "data_files": [{"split": "train", "path": "full/train-*"}, {"split": "test", "path": "full/test-*"}, {"split": "validation", "path": "full/validation-*"}], "default": true}, {"config_name": "ptbr", "data_files": [{"split": "train", "path": "ptbr/train-*"}, {"split": "test", "path": "ptbr/test-*"}, {"split": "validation", "path": "ptbr/validation-*"}]}, {"config_name": "ptpt", "data_files": [{"split": "train", "path": "ptpt/train-*"}, {"split": "test", "path": "ptpt/test-*"}, {"split": "validation", "path": "ptpt/validation-*"}]}]}
2024-01-09T12:47:28+00:00
[]
[ "pt" ]
TAGS #task_categories-text-classification #task_ids-text-scoring #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Portuguese #license-unknown #region-us
Dataset Card for ASSIN ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: ASSIN homepage * Repository: ASSIN repository * Paper: ASSIN: Evaluation of Semantic Similarity and Textual Inference * Point of Contact: Erick Rocha Fonseca ### Dataset Summary The ASSIN (Avaliação de Similaridade Semântica e INferência textual) corpus is a corpus annotated with pairs of sentences written in Portuguese that is suitable for the exploration of textual entailment and paraphrasing classifiers. The corpus contains pairs of sentences extracted from news articles written in European Portuguese (EP) and Brazilian Portuguese (BP), obtained from Google News Portugal and Brazil, respectively. To create the corpus, the authors started by collecting a set of news articles describing the same event (one news article from Google News Portugal and another from Google News Brazil) from Google News. Then, they employed Latent Dirichlet Allocation (LDA) models to retrieve pairs of similar sentences between sets of news articles that were grouped together around the same topic. For that, two LDA models were trained (for EP and for BP) on external and large-scale collections of unannotated news articles from Portuguese and Brazilian news providers, respectively. Then, the authors defined a lower and upper threshold for the sentence similarity score of the retrieved pairs of sentences, taking into account that high similarity scores correspond to sentences that contain almost the same content (paraphrase candidates), and low similarity scores correspond to sentences that are very different in content from each other (no-relation candidates). From the collection of pairs of sentences obtained at this stage, the authors performed some manual grammatical corrections and discarded some of the pairs wrongly retrieved. Furthermore, from a preliminary analysis made to the retrieved sentence pairs the authors noticed that the number of contradictions retrieved during the previous stage was very low. Additionally, they also noticed that event though paraphrases are not very frequent, they occur with some frequency in news articles. Consequently, in contrast with the majority of the currently available corpora for other languages, which consider as labels “neutral”, “entailment” and “contradiction” for the task of RTE, the authors of the ASSIN corpus decided to use as labels “none”, “entailment” and “paraphrase”. Finally, the manual annotation of pairs of sentences was performed by human annotators. At least four annotators were randomly selected to annotate each pair of sentences, which is done in two steps: (i) assigning a semantic similarity label (a score between 1 and 5, from unrelated to very similar); and (ii) providing an entailment label (one sentence entails the other, sentences are paraphrases, or no relation). Sentence pairs where at least three annotators do not agree on the entailment label were considered controversial and thus discarded from the gold standard annotations. 
The full dataset has 10,000 sentence pairs, half of which in Brazilian Portuguese (ptbr) and half in European Portuguese (ptpt). Either language variant has 2,500 pairs for training, 500 for validation and 2,000 for testing. ### Supported Tasks and Leaderboards ### Languages The language supported is Portuguese. Dataset Structure ----------------- ### Data Instances An example from the ASSIN dataset looks as follows: ### Data Fields * 'sentence\_pair\_id': a 'int64' feature. * 'premise': a 'string' feature. * 'hypothesis': a 'string' feature. * 'relatedness\_score': a 'float32' feature. * 'entailment\_judgment': a classification label, with possible values including 'NONE', 'ENTAILMENT', 'PARAPHRASE'. ### Data Splits The data is split into train, validation and test set. The split sizes are as follow: Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @jonatasgrosman for adding this dataset.
[ "### Dataset Summary\n\n\nThe ASSIN (Avaliação de Similaridade Semântica e INferência textual) corpus is a corpus annotated with pairs of sentences written in\nPortuguese that is suitable for the exploration of textual entailment and paraphrasing classifiers. The corpus contains pairs of sentences\nextracted from news articles written in European Portuguese (EP) and Brazilian Portuguese (BP), obtained from Google News Portugal\nand Brazil, respectively. To create the corpus, the authors started by collecting a set of news articles describing the\nsame event (one news article from Google News Portugal and another from Google News Brazil) from Google News.\nThen, they employed Latent Dirichlet Allocation (LDA) models to retrieve pairs of similar sentences between sets of news\narticles that were grouped together around the same topic. For that, two LDA models were trained (for EP and for BP)\non external and large-scale collections of unannotated news articles from Portuguese and Brazilian news providers, respectively.\nThen, the authors defined a lower and upper threshold for the sentence similarity score of the retrieved pairs of sentences,\ntaking into account that high similarity scores correspond to sentences that contain almost the same content (paraphrase candidates),\nand low similarity scores correspond to sentences that are very different in content from each other (no-relation candidates).\nFrom the collection of pairs of sentences obtained at this stage, the authors performed some manual grammatical corrections\nand discarded some of the pairs wrongly retrieved. Furthermore, from a preliminary analysis made to the retrieved sentence pairs\nthe authors noticed that the number of contradictions retrieved during the previous stage was very low. Additionally, they also\nnoticed that event though paraphrases are not very frequent, they occur with some frequency in news articles. Consequently,\nin contrast with the majority of the currently available corpora for other languages, which consider as labels “neutral”, “entailment”\nand “contradiction” for the task of RTE, the authors of the ASSIN corpus decided to use as labels “none”, “entailment” and “paraphrase”.\nFinally, the manual annotation of pairs of sentences was performed by human annotators. At least four annotators were randomly\nselected to annotate each pair of sentences, which is done in two steps: (i) assigning a semantic similarity label (a score between 1 and 5,\nfrom unrelated to very similar); and (ii) providing an entailment label (one sentence entails the other, sentences are paraphrases,\nor no relation). Sentence pairs where at least three annotators do not agree on the entailment label were considered controversial\nand thus discarded from the gold standard annotations. The full dataset has 10,000 sentence pairs, half of which in Brazilian Portuguese (ptbr)\nand half in European Portuguese (ptpt). 
Either language variant has 2,500 pairs for training, 500 for validation and 2,000 for testing.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe language supported is Portuguese.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from the ASSIN dataset looks as follows:", "### Data Fields\n\n\n* 'sentence\\_pair\\_id': a 'int64' feature.\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'relatedness\\_score': a 'float32' feature.\n* 'entailment\\_judgment': a classification label, with possible values including 'NONE', 'ENTAILMENT', 'PARAPHRASE'.", "### Data Splits\n\n\nThe data is split into train, validation and test set. The split sizes are as follow:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @jonatasgrosman for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Portuguese #license-unknown #region-us \n", "### Dataset Summary\n\n\nThe ASSIN (Avaliação de Similaridade Semântica e INferência textual) corpus is a corpus annotated with pairs of sentences written in\nPortuguese that is suitable for the exploration of textual entailment and paraphrasing classifiers. The corpus contains pairs of sentences\nextracted from news articles written in European Portuguese (EP) and Brazilian Portuguese (BP), obtained from Google News Portugal\nand Brazil, respectively. To create the corpus, the authors started by collecting a set of news articles describing the\nsame event (one news article from Google News Portugal and another from Google News Brazil) from Google News.\nThen, they employed Latent Dirichlet Allocation (LDA) models to retrieve pairs of similar sentences between sets of news\narticles that were grouped together around the same topic. For that, two LDA models were trained (for EP and for BP)\non external and large-scale collections of unannotated news articles from Portuguese and Brazilian news providers, respectively.\nThen, the authors defined a lower and upper threshold for the sentence similarity score of the retrieved pairs of sentences,\ntaking into account that high similarity scores correspond to sentences that contain almost the same content (paraphrase candidates),\nand low similarity scores correspond to sentences that are very different in content from each other (no-relation candidates).\nFrom the collection of pairs of sentences obtained at this stage, the authors performed some manual grammatical corrections\nand discarded some of the pairs wrongly retrieved. Furthermore, from a preliminary analysis made to the retrieved sentence pairs\nthe authors noticed that the number of contradictions retrieved during the previous stage was very low. Additionally, they also\nnoticed that event though paraphrases are not very frequent, they occur with some frequency in news articles. Consequently,\nin contrast with the majority of the currently available corpora for other languages, which consider as labels “neutral”, “entailment”\nand “contradiction” for the task of RTE, the authors of the ASSIN corpus decided to use as labels “none”, “entailment” and “paraphrase”.\nFinally, the manual annotation of pairs of sentences was performed by human annotators. At least four annotators were randomly\nselected to annotate each pair of sentences, which is done in two steps: (i) assigning a semantic similarity label (a score between 1 and 5,\nfrom unrelated to very similar); and (ii) providing an entailment label (one sentence entails the other, sentences are paraphrases,\nor no relation). Sentence pairs where at least three annotators do not agree on the entailment label were considered controversial\nand thus discarded from the gold standard annotations. The full dataset has 10,000 sentence pairs, half of which in Brazilian Portuguese (ptbr)\nand half in European Portuguese (ptpt). 
Either language variant has 2,500 pairs for training, 500 for validation and 2,000 for testing.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe language supported is Portuguese.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from the ASSIN dataset looks as follows:", "### Data Fields\n\n\n* 'sentence\\_pair\\_id': a 'int64' feature.\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'relatedness\\_score': a 'float32' feature.\n* 'entailment\\_judgment': a classification label, with possible values including 'NONE', 'ENTAILMENT', 'PARAPHRASE'.", "### Data Splits\n\n\nThe data is split into train, validation and test set. The split sizes are as follow:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @jonatasgrosman for adding this dataset." ]
[ 117, 710, 10, 20, 19, 109, 32, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 18 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Portuguese #license-unknown #region-us \n" ]
0ff9c86779e06855536d8775ce5550550e1e5a2d
# Dataset Card for ASSIN 2

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [ASSIN 2 homepage](https://sites.google.com/view/assin2)
- **Repository:** [ASSIN 2 repository](https://sites.google.com/view/assin2)
- **Paper:** [The ASSIN 2 shared task: a quick overview](https://drive.google.com/file/d/1ft1VU6xiVm-N58dfAp6FHWjQ4IvcXgqp/view)
- **Point of Contact:** [Livy Real](mailto:livyreal@gmail.com)

### Dataset Summary

The ASSIN 2 corpus is composed of rather simple sentences, following the procedures of SemEval 2014 Task 1. The training and validation data are composed, respectively, of 6,500 and 500 sentence pairs in Brazilian Portuguese, annotated for entailment and semantic similarity. Semantic similarity values range from 1 to 5, and text entailment classes are either entailment or none. The test data are composed of approximately 3,000 sentence pairs with the same annotation. All data were manually annotated.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The language supported is Portuguese.

## Dataset Structure

### Data Instances

An example from the ASSIN 2 dataset looks as follows:

```
{
  "entailment_judgment": 1,
  "hypothesis": "Uma criança está segurando uma pistola de água",
  "premise": "Uma criança risonha está segurando uma pistola de água e sendo espirrada com água",
  "relatedness_score": 4.5,
  "sentence_pair_id": 1
}
```

### Data Fields

- `sentence_pair_id`: an `int64` feature.
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `relatedness_score`: a `float32` feature.
- `entailment_judgment`: a classification label, with possible values including `NONE`, `ENTAILMENT`.

### Data Splits

The data is split into train, validation and test sets. The split sizes are as follows:

| Train | Val | Test |
| ----- | --- | ---- |
| 6500  | 500 | 2448 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{real2020assin,
  title={The assin 2 shared task: a quick overview},
  author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo},
  booktitle={International Conference on Computational Processing of the Portuguese Language},
  pages={406--412},
  year={2020},
  organization={Springer}
}
```

### Contributions

Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
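Evaluation sketch (an editorial addition): Pearson correlation is commonly reported for the similarity subtask of this corpus; the hub id `assin2`, the use of `scipy`, and the toy overlap baseline below are assumptions included for illustration only, not a reference implementation.

```python
# Minimal sketch, assuming the Hugging Face `datasets` library, the hub id
# "assin2", and scipy; the overlap baseline is only a placeholder model.
from datasets import load_dataset
from scipy.stats import pearsonr

val = load_dataset("assin2", split="validation")

def overlap_score(premise, hypothesis):
    # Map word-level Jaccard overlap into the card's [1, 5] similarity range.
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return 1.0 + 4.0 * len(p & h) / max(len(p | h), 1)

preds = [overlap_score(ex["premise"], ex["hypothesis"]) for ex in val]
gold = val["relatedness_score"]
print(pearsonr(preds, gold))  # correlation between baseline and gold scores
```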
assin2
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pt", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pt"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["text-scoring", "natural-language-inference", "semantic-similarity-scoring"], "paperswithcode_id": "assin2", "pretty_name": "ASSIN 2", "dataset_info": {"features": [{"name": "sentence_pair_id", "dtype": "int64"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "relatedness_score", "dtype": "float32"}, {"name": "entailment_judgment", "dtype": {"class_label": {"names": {"0": "NONE", "1": "ENTAILMENT"}}}}], "splits": [{"name": "train", "num_bytes": 863995, "num_examples": 6500}, {"name": "test", "num_bytes": 339266, "num_examples": 2448}, {"name": "validation", "num_bytes": 66824, "num_examples": 500}], "download_size": 566733, "dataset_size": 1270085}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-09T12:48:38+00:00
[]
[ "pt" ]
TAGS #task_categories-text-classification #task_ids-text-scoring #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Portuguese #license-unknown #region-us
Dataset Card for ASSIN 2 ======================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: ASSIN 2 homepage * Repository: ASSIN 2 repository * Paper: The ASSIN 2 shared task: a quick overview * Point of Contact: Livy Real ### Dataset Summary The ASSIN 2 corpus is composed of rather simple sentences. Following the procedures of SemEval 2014 Task 1. The training and validation data are composed, respectively, of 6,500 and 500 sentence pairs in Brazilian Portuguese, annotated for entailment and semantic similarity. Semantic similarity values range from 1 to 5, and text entailment classes are either entailment or none. The test data are composed of approximately 3,000 sentence pairs with the same annotation. All data were manually annotated. ### Supported Tasks and Leaderboards ### Languages The language supported is Portuguese. Dataset Structure ----------------- ### Data Instances An example from the ASSIN 2 dataset looks as follows: ### Data Fields * 'sentence\_pair\_id': a 'int64' feature. * 'premise': a 'string' feature. * 'hypothesis': a 'string' feature. * 'relatedness\_score': a 'float32' feature. * 'entailment\_judgment': a classification label, with possible values including 'NONE', 'ENTAILMENT'. ### Data Splits The data is split into train, validation and test set. The split sizes are as follow: Train: 6500, Val: 500, Test: 2448 Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @jonatasgrosman for adding this dataset.
[ "### Dataset Summary\n\n\nThe ASSIN 2 corpus is composed of rather simple sentences. Following the procedures of SemEval 2014 Task 1.\nThe training and validation data are composed, respectively, of 6,500 and 500 sentence pairs in Brazilian Portuguese,\nannotated for entailment and semantic similarity. Semantic similarity values range from 1 to 5, and text entailment\nclasses are either entailment or none. The test data are composed of approximately 3,000 sentence pairs with the same\nannotation. All data were manually annotated.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe language supported is Portuguese.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from the ASSIN 2 dataset looks as follows:", "### Data Fields\n\n\n* 'sentence\\_pair\\_id': a 'int64' feature.\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'relatedness\\_score': a 'float32' feature.\n* 'entailment\\_judgment': a classification label, with possible values including 'NONE', 'ENTAILMENT'.", "### Data Splits\n\n\nThe data is split into train, validation and test set. The split sizes are as follow:\n\n\nTrain: 6500, Val: 500, Test: 2448\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @jonatasgrosman for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Portuguese #license-unknown #region-us \n", "### Dataset Summary\n\n\nThe ASSIN 2 corpus is composed of rather simple sentences. Following the procedures of SemEval 2014 Task 1.\nThe training and validation data are composed, respectively, of 6,500 and 500 sentence pairs in Brazilian Portuguese,\nannotated for entailment and semantic similarity. Semantic similarity values range from 1 to 5, and text entailment\nclasses are either entailment or none. The test data are composed of approximately 3,000 sentence pairs with the same\nannotation. All data were manually annotated.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe language supported is Portuguese.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from the ASSIN 2 dataset looks as follows:", "### Data Fields\n\n\n* 'sentence\\_pair\\_id': a 'int64' feature.\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'relatedness\\_score': a 'float32' feature.\n* 'entailment\\_judgment': a classification label, with possible values including 'NONE', 'ENTAILMENT'.", "### Data Splits\n\n\nThe data is split into train, validation and test set. The split sizes are as follow:\n\n\nTrain: 6500, Val: 500, Test: 2448\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @jonatasgrosman for adding this dataset." ]
[ 117, 129, 10, 20, 20, 102, 45, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 18 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Portuguese #license-unknown #region-us \n### Dataset Summary\n\n\nThe ASSIN 2 corpus is composed of rather simple sentences. Following the procedures of SemEval 2014 Task 1.\nThe training and validation data are composed, respectively, of 6,500 and 500 sentence pairs in Brazilian Portuguese,\nannotated for entailment and semantic similarity. Semantic similarity values range from 1 to 5, and text entailment\nclasses are either entailment or none. The test data are composed of approximately 3,000 sentence pairs with the same\nannotation. All data were manually annotated.### Supported Tasks and Leaderboards### Languages\n\n\nThe language supported is Portuguese.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example from the ASSIN 2 dataset looks as follows:### Data Fields\n\n\n* 'sentence\\_pair\\_id': a 'int64' feature.\n* 'premise': a 'string' feature.\n* 'hypothesis': a 'string' feature.\n* 'relatedness\\_score': a 'float32' feature.\n* 'entailment\\_judgment': a classification label, with possible values including 'NONE', 'ENTAILMENT'.### Data Splits\n\n\nThe data is split into train, validation and test set. The split sizes are as follow:\n\n\nTrain: 6500, Val: 500, Test: 2448\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?" ]
a6ea1d221fa3a5c953b1e69f2594816046bb57c7
# Dataset Card for An Atlas of Machine Commonsense for If-Then Reasoning - Atomic Common Sense Dataset

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
https://homes.cs.washington.edu/~msap/atomic/
- **Repository:**
https://homes.cs.washington.edu/~msap/atomic/
- **Paper:**
Maarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith & Yejin Choi (2019). ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning. AAAI

### Dataset Summary

This dataset provides the template sentences and
relationships defined in the ATOMIC common sense dataset. There are
three splits - train, test, and dev.

From the authors.

Disclaimer/Content warning: the events in atomic have been
automatically extracted from blogs, stories and books written at
various times. The events might depict violent or problematic actions,
which we left in the corpus for the sake of learning the (probably
negative but still important) commonsense implications associated with
the events. We removed a small set of truly out-dated events, but
might have missed some so please email us (msap@cs.washington.edu) if
you have any concerns.

For more information, see: https://homes.cs.washington.edu/~msap/atomic/

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages
en

## Dataset Structure

### Data Instances

Here is one example from the atomic dataset:

```
{'event': "PersonX uses PersonX's ___ to obtain", 'oEffect': [], 'oReact': ['annoyed', 'angry', 'worried'], 'oWant': [], 'prefix': ['uses', 'obtain'], 'split': 'trn', 'xAttr': [], 'xEffect': [], 'xIntent': ['to have an advantage', 'to fulfill a desire', 'to get out of trouble'], 'xNeed': [], 'xReact': ['pleased', 'smug', 'excited'], 'xWant': []}
```

### Data Fields

Notes from the authors:

* event: just a string representation of the event.
* oEffect,oReact,oWant,xAttr,xEffect,xIntent,xNeed,xReact,xWant: annotations for each of the dimensions, stored in a json-dumped string.
  Note: "none" means the worker explicitly responded with the empty response, whereas [] means the worker did not annotate this dimension.
* prefix: json-dumped string that represents the prefix of content words (used to make a better trn/dev/tst split).
* split: string rep of which split the event belongs to.

### Data Splits

The atomic dataset has three splits: test, train and dev of the form:

## Dataset Creation

### Curation Rationale

This dataset was gathered and created to assist in common sense reasoning.
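For reference, a minimal loading sketch (assuming the Hugging Face `datasets` library and the `atomic` loader name from this card's metadata; field access follows the Data Fields notes above):

```
from datasets import load_dataset

# Minimal sketch: load the ATOMIC splits from the Hugging Face Hub.
# The loader name "atomic" is assumed from this card's metadata.
atomic = load_dataset("atomic")

example = atomic["train"][0]
print(example["event"])    # template sentence, e.g. "PersonX uses PersonX's ___ to obtain"
print(example["xIntent"])  # list of annotations; [] means the dimension was not annotated
```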
### Source Data

#### Initial Data Collection and Normalization

See the research paper and website for more detail. The dataset was
created by the University of Washington using crowd-sourced data.

#### Who are the source language producers?

The Atomic authors and crowdsourced workers.

### Annotations

#### Annotation process

Human annotations directed by forms.

#### Who are the annotators?

Human annotators.

### Personal and Sensitive Information

Unknown, but likely none.

## Considerations for Using the Data

### Social Impact of Dataset

The goal for the work is to help machines understand common sense.

### Discussion of Biases

Since the data is annotated by humans, it is likely to be biased.

From the authors:

Disclaimer/Content warning: the events in atomic have been automatically extracted from blogs, stories and books written at various times. The events might depict violent or problematic actions, which we left in the corpus for the sake of learning the (probably negative but still important) commonsense implications associated with the events. We removed a small set of truly out-dated events, but might have missed some so please email us (msap@cs.washington.edu) if you have any concerns.

### Other Known Limitations

While there are many relationships, the data is quite sparse. Also, each item of the dataset could be expanded into multiple sentences along the various dimensions, oEffect, oReact, etc.

For example, given event: "PersonX uses PersonX's ___ to obtain" and dimension oReact: "annoyed", this could be transformed into an entry:

"PersonX uses PersonX's ___ to obtain => PersonY is annoyed"

## Additional Information

### Dataset Curators

The authors of Atomic at the University of Washington.

### Licensing Information

The Creative Commons Attribution 4.0 International License. https://creativecommons.org/licenses/by/4.0/

### Citation Information

@article{Sap2019ATOMICAA,
  title={ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning},
  author={Maarten Sap and Ronan Le Bras and Emily Allaway and Chandra Bhagavatula and Nicholas Lourie and Hannah Rashkin and Brendan Roof and Noah A. Smith and Yejin Choi},
  journal={ArXiv},
  year={2019},
  volume={abs/1811.00146}
}

### Contributions

Thanks to [@ontocord](https://github.com/ontocord) for adding this dataset.
atomic
[ "task_categories:text2text-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "common-sense-if-then-reasoning", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": [], "paperswithcode_id": "atomic", "pretty_name": "ATOMIC", "tags": ["common-sense-if-then-reasoning"], "dataset_info": {"features": [{"name": "event", "dtype": "string"}, {"name": "oEffect", "sequence": "string"}, {"name": "oReact", "sequence": "string"}, {"name": "oWant", "sequence": "string"}, {"name": "xAttr", "sequence": "string"}, {"name": "xEffect", "sequence": "string"}, {"name": "xIntent", "sequence": "string"}, {"name": "xNeed", "sequence": "string"}, {"name": "xReact", "sequence": "string"}, {"name": "xWant", "sequence": "string"}, {"name": "prefix", "sequence": "string"}, {"name": "split", "dtype": "string"}], "config_name": "atomic", "splits": [{"name": "train", "num_bytes": 32441878, "num_examples": 202271}, {"name": "test", "num_bytes": 3995624, "num_examples": 24856}, {"name": "validation", "num_bytes": 3629768, "num_examples": 22620}], "download_size": 19083782, "dataset_size": 40067270}}
2024-01-18T11:01:54+00:00
[]
[ "en" ]
TAGS #task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #common-sense-if-then-reasoning #region-us
# Dataset Card for An Atlas of Machine Commonsense for If-Then Reasoning - Atomic Common Sense Dataset

## Table of Contents
- Dataset Description
 - Dataset Summary
 - Supported Tasks and Leaderboards
 - Languages
- Dataset Structure
 - Data Instances
 - Data Fields
 - Data Splits
- Dataset Creation
 - Curation Rationale
 - Source Data
 - Annotations
 - Personal and Sensitive Information
- Considerations for Using the Data
 - Social Impact of Dataset
 - Discussion of Biases
 - Other Known Limitations
- Additional Information
 - Dataset Curators
 - Licensing Information
 - Citation Information
 - Contributions

## Dataset Description

- Homepage:
URL
- Repository:
URL
- Paper:
Maarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith & Yejin Choi (2019). ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning. AAAI

### Dataset Summary

This dataset provides the template sentences and
relationships defined in the ATOMIC common sense dataset. There are
three splits - train, test, and dev.

From the authors.

Disclaimer/Content warning: the events in atomic have been
automatically extracted from blogs, stories and books written at
various times. The events might depict violent or problematic actions,
which we left in the corpus for the sake of learning the (probably
negative but still important) commonsense implications associated with
the events. We removed a small set of truly out-dated events, but
might have missed some so please email us (msap@URL) if
you have any concerns.

For more information, see: URL

### Supported Tasks and Leaderboards

### Languages
en

## Dataset Structure

### Data Instances

Here is one example from the atomic dataset:

'' 
{'event': "PersonX uses PersonX's ___ to obtain", 'oEffect': [], 'oReact': ['annoyed', 'angry', 'worried'], 'oWant': [], 'prefix': ['uses', 'obtain'], 'split': 'trn', 'xAttr': [], 'xEffect': [], 'xIntent': ['to have an advantage', 'to fulfill a desire', 'to get out of trouble'], 'xNeed': [], 'xReact': ['pleased', 'smug', 'excited'], 'xWant': []}
''

### Data Fields

Notes from the authors:

* event: just a string representation of the event.
* oEffect,oReact,oWant,xAttr,xEffect,xIntent,xNeed,xReact,xWant: annotations for each of the dimensions, stored in a json-dumped string.
 Note: "none" means the worker explicitly responded with the empty response, whereas [] means the worker did not annotate this dimension.
* prefix: json-dumped string that represents the prefix of content words (used to make a better trn/dev/tst split).
* split: string rep of which split the event belongs to.

### Data Splits

The atomic dataset has three splits: test, train and dev of the form:

## Dataset Creation

### Curation Rationale

This dataset was gathered and created to assist in common sense reasoning.

### Source Data

#### Initial Data Collection and Normalization

See the research paper and website for more detail. The dataset was
created by the University of Washington using crowd-sourced data.

#### Who are the source language producers?

The Atomic authors and crowdsourced workers.

### Annotations

#### Annotation process

Human annotations directed by forms.

#### Who are the annotators?

Human annotators.

### Personal and Sensitive Information

Unknown, but likely none.

## Considerations for Using the Data

### Social Impact of Dataset

The goal for the work is to help machines understand common sense.

### Discussion of Biases

Since the data is annotated by humans, it is likely to be biased. 
From the authors:

Disclaimer/Content warning: the events in atomic have been automatically extracted from blogs, stories and books written at various times. The events might depict violent or problematic actions, which we left in the corpus for the sake of learning the (probably negative but still important) commonsense implications associated with the events. We removed a small set of truly out-dated events, but might have missed some so please email us (msap@URL) if you have any concerns.

### Other Known Limitations

While there are many relationships, the data is quite sparse. Also, each item of the dataset could be expanded into multiple sentences along the various dimensions, oEffect, oReact, etc.

For example, given event: "PersonX uses PersonX's ___ to obtain" and dimension oReact: "annoyed", this could be transformed into an entry:

"PersonX uses PersonX's ___ to obtain => PersonY is annoyed"

## Additional Information

### Dataset Curators

The authors of Atomic at the University of Washington

### Licensing Information

The Creative Commons Attribution 4.0 International License. URL

@article{Sap2019ATOMICAA,
 title={ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning},
 author={Maarten Sap and Ronan Le Bras and Emily Allaway and Chandra Bhagavatula and Nicholas Lourie and Hannah Rashkin and Brendan Roof and Noah A. Smith and Yejin Choi},
 journal={ArXiv},
 year={2019},
 volume={abs/1811.00146}
}

### Contributions

Thanks to @ontocord for adding this dataset.
[ "# Dataset Card for An Atlas of Machine Commonsense for If-Then Reasoning - Atomic Common Sense Dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\nURL\n- Repository:\nURL\n- Paper:\nMaarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith & Yejin Choi (2019). ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning. AAAI", "### Dataset Summary\n\nThis dataset provides the template sentences and\nrelationships defined in the ATOMIC common sense dataset. There are\nthree splits - train, test, and dev.\n\nFrom the authors.\n\nDisclaimer/Content warning: the events in atomic have been\nautomatically extracted from blogs, stories and books written at\nvarious times. The events might depict violent or problematic actions,\nwhich we left in the corpus for the sake of learning the (probably\nnegative but still important) commonsense implications associated with\nthe events. We removed a small set of truly out-dated events, but\nmight have missed some so please email us (msap@URL) if\nyou have any concerns.\n\n\nFor more information, see: URL", "### Supported Tasks and Leaderboards", "### Languages\nen", "## Dataset Structure", "### Data Instances\n\nHere is one example from the atomic dataset:\n\n\n'' \n{'event': \"PersonX uses PersonX's ___ to obtain\", 'oEffect': [], 'oReact': ['annoyed', 'angry', 'worried'], 'oWant': [], 'prefix': ['uses', 'obtain'], 'split': 'trn', 'xAttr': [], 'xEffect': [], 'xIntent': ['to have an advantage', 'to fulfill a desire', 'to get out of trouble'], 'xNeed': [], 'xReact': ['pleased', 'smug', 'excited'], 'xWant': []}\n''", "### Data Fields\n\nNotes from the authors:\n\n* event: just a string representation of the event.\n* oEffect,oReact,oWant,xAttr,xEffect,xIntent,xNeed,xReact,xWant: annotations for each of the dimensions, stored in a json-dumped string.\n Note: \"none\" means the worker explicitly responded with the empty response, whereas [] means the worker did not annotate this dimension.\n* prefix: json-dumped string that represents the prefix of content words (used to make a better trn/dev/tst split).\n* split: string rep of which split the event belongs to.", "### Data Splits\n\nThe atomic dataset has three splits: test, train and dev of the form:", "## Dataset Creation", "### Curation Rationale\n\nThis dataset was gathered and created over to assist in common sense reasoning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nSee the reaserch paper and website for more detail. 
The dataset was\ncreated by the University of Washington using crowd-sourced data.", "#### Who are the source language producers?\n\nThe Atomic authors and crowdsourced workers.", "### Annotations", "#### Annotation process\n\nHuman annotations directed by forms.", "#### Who are the annotators?\n\nHuman annotators.", "### Personal and Sensitive Information\n\nUnknown, but likely none.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe goal for the work is to help machines understand common sense.", "### Discussion of Biases\n\nSince the data is annotated by humans, it is likely to be biased. From the authors:\n\n\nDisclaimer/Content warning: the events in atomic have been automatically extracted from blogs, stories and books written at various times. The events might depict violent or problematic actions, which we left in the corpus for the sake of learning the (probably negative but still important) commonsense implications associated with the events. We removed a small set of truly out-dated events, but might have missed some so please email us (msap@URL) if you have any concerns.", "### Other Known Limitations\n\nWhile there are many relationships, the data is quite sparse. Also, each item of the dataset could be expanded into multiple sentences along the various dimensions, oEffect, oReact, etc.\n\nFor example, given event: \"PersonX uses PersonX's ___ to obtain\" and dimension oReact: \"annoyed\", this could be transformed into an entry:\n\n\"PersonX uses PersonX's ___ to obtain => PersonY is annoyed\"", "## Additional Information", "### Dataset Curators\n\nThe authors of Atomic at the University of Washington", "### Licensing Information\n\nThe Creative Commons Attribution 4.0 International License. URL\n\n\n\n@article{Sap2019ATOMICAA,\n title={ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning},\n author={Maarten Sap and Ronan Le Bras and Emily Allaway and Chandra Bhagavatula and Nicholas Lourie and Hannah Rashkin and Brendan Roof and Noah A. Smith and Yejin Choi},\n journal={ArXiv},\n year={2019},\n volume={abs/1811.00146}\n}", "### Contributions\n\nThanks to @ontocord for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #common-sense-if-then-reasoning #region-us \n", "# Dataset Card for An Atlas of Machine Commonsense for If-Then Reasoning - Atomic Common Sense Dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\nURL\n- Repository:\nURL\n- Paper:\nMaarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith & Yejin Choi (2019). ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning. AAAI", "### Dataset Summary\n\nThis dataset provides the template sentences and\nrelationships defined in the ATOMIC common sense dataset. There are\nthree splits - train, test, and dev.\n\nFrom the authors.\n\nDisclaimer/Content warning: the events in atomic have been\nautomatically extracted from blogs, stories and books written at\nvarious times. The events might depict violent or problematic actions,\nwhich we left in the corpus for the sake of learning the (probably\nnegative but still important) commonsense implications associated with\nthe events. We removed a small set of truly out-dated events, but\nmight have missed some so please email us (msap@URL) if\nyou have any concerns.\n\n\nFor more information, see: URL", "### Supported Tasks and Leaderboards", "### Languages\nen", "## Dataset Structure", "### Data Instances\n\nHere is one example from the atomic dataset:\n\n\n'' \n{'event': \"PersonX uses PersonX's ___ to obtain\", 'oEffect': [], 'oReact': ['annoyed', 'angry', 'worried'], 'oWant': [], 'prefix': ['uses', 'obtain'], 'split': 'trn', 'xAttr': [], 'xEffect': [], 'xIntent': ['to have an advantage', 'to fulfill a desire', 'to get out of trouble'], 'xNeed': [], 'xReact': ['pleased', 'smug', 'excited'], 'xWant': []}\n''", "### Data Fields\n\nNotes from the authors:\n\n* event: just a string representation of the event.\n* oEffect,oReact,oWant,xAttr,xEffect,xIntent,xNeed,xReact,xWant: annotations for each of the dimensions, stored in a json-dumped string.\n Note: \"none\" means the worker explicitly responded with the empty response, whereas [] means the worker did not annotate this dimension.\n* prefix: json-dumped string that represents the prefix of content words (used to make a better trn/dev/tst split).\n* split: string rep of which split the event belongs to.", "### Data Splits\n\nThe atomic dataset has three splits: test, train and dev of the form:", "## Dataset Creation", "### Curation Rationale\n\nThis dataset was gathered and created over to assist in common sense reasoning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nSee the reaserch paper and website for more detail. 
The dataset was\ncreated by the University of Washington using crowd-sourced data.", "#### Who are the source language producers?\n\nThe Atomic authors and crowdsourced workers.", "### Annotations", "#### Annotation process\n\nHuman annotations directed by forms.", "#### Who are the annotators?\n\nHuman annotators.", "### Personal and Sensitive Information\n\nUnknown, but likely none.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe goal for the work is to help machines understand common sense.", "### Discussion of Biases\n\nSince the data is annotated by humans, it is likely to be biased. From the authors:\n\n\nDisclaimer/Content warning: the events in atomic have been automatically extracted from blogs, stories and books written at various times. The events might depict violent or problematic actions, which we left in the corpus for the sake of learning the (probably negative but still important) commonsense implications associated with the events. We removed a small set of truly out-dated events, but might have missed some so please email us (msap@URL) if you have any concerns.", "### Other Known Limitations\n\nWhile there are many relationships, the data is quite sparse. Also, each item of the dataset could be expanded into multiple sentences along the various dimensions, oEffect, oReact, etc.\n\nFor example, given event: \"PersonX uses PersonX's ___ to obtain\" and dimension oReact: \"annoyed\", this could be transformed into an entry:\n\n\"PersonX uses PersonX's ___ to obtain => PersonY is annoyed\"", "## Additional Information", "### Dataset Curators\n\nThe authors of Atomic at the University of Washington", "### Licensing Information\n\nThe Creative Commons Attribution 4.0 International License. URL\n\n\n\n@article{Sap2019ATOMICAA,\n title={ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning},\n author={Maarten Sap and Ronan Le Bras and Emily Allaway and Chandra Bhagavatula and Nicholas Lourie and Hannah Rashkin and Brendan Roof and Noah A. Smith and Yejin Choi},\n journal={ArXiv},\n year={2019},\n volume={abs/1811.00146}\n}", "### Contributions\n\nThanks to @ontocord for adding this dataset." ]
[ 97, 25, 120, 82, 156, 10, 5, 6, 200, 165, 24, 5, 25, 4, 37, 19, 5, 15, 14, 17, 8, 20, 133, 115, 5, 18, 118, 16 ]
[ "passage: TAGS\n#task_categories-text2text-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #common-sense-if-then-reasoning #region-us \n# Dataset Card for An Atlas of Machine Commonsense for If-Then Reasoning - Atomic Common Sense Dataset## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\nURL\n- Repository:\nURL\n- Paper:\nMaarten Sap, Ronan LeBras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith & Yejin Choi (2019). ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning. AAAI### Dataset Summary\n\nThis dataset provides the template sentences and\nrelationships defined in the ATOMIC common sense dataset. There are\nthree splits - train, test, and dev.\n\nFrom the authors.\n\nDisclaimer/Content warning: the events in atomic have been\nautomatically extracted from blogs, stories and books written at\nvarious times. The events might depict violent or problematic actions,\nwhich we left in the corpus for the sake of learning the (probably\nnegative but still important) commonsense implications associated with\nthe events. We removed a small set of truly out-dated events, but\nmight have missed some so please email us (msap@URL) if\nyou have any concerns.\n\n\nFor more information, see: URL### Supported Tasks and Leaderboards### Languages\nen## Dataset Structure", "passage: ### Data Instances\n\nHere is one example from the atomic dataset:\n\n\n'' \n{'event': \"PersonX uses PersonX's ___ to obtain\", 'oEffect': [], 'oReact': ['annoyed', 'angry', 'worried'], 'oWant': [], 'prefix': ['uses', 'obtain'], 'split': 'trn', 'xAttr': [], 'xEffect': [], 'xIntent': ['to have an advantage', 'to fulfill a desire', 'to get out of trouble'], 'xNeed': [], 'xReact': ['pleased', 'smug', 'excited'], 'xWant': []}\n''### Data Fields\n\nNotes from the authors:\n\n* event: just a string representation of the event.\n* oEffect,oReact,oWant,xAttr,xEffect,xIntent,xNeed,xReact,xWant: annotations for each of the dimensions, stored in a json-dumped string.\n Note: \"none\" means the worker explicitly responded with the empty response, whereas [] means the worker did not annotate this dimension.\n* prefix: json-dumped string that represents the prefix of content words (used to make a better trn/dev/tst split).\n* split: string rep of which split the event belongs to.### Data Splits\n\nThe atomic dataset has three splits: test, train and dev of the form:## Dataset Creation### Curation Rationale\n\nThis dataset was gathered and created over to assist in common sense reasoning.### Source Data#### Initial Data Collection and Normalization\n\nSee the reaserch paper and website for more detail. 
The dataset was\ncreated by the University of Washington using crowd-sourced data.#### Who are the source language producers?\n\nThe Atomic authors and crowdsourced workers.### Annotations#### Annotation process\n\nHuman annotations directed by forms.#### Who are the annotators?\n\nHuman annotators.### Personal and Sensitive Information\n\nUnknown, but likely none.## Considerations for Using the Data### Social Impact of Dataset\n\nThe goal for the work is to help machines understand common sense.### Discussion of Biases\n\nSince the data is annotated by humans, it is likely to be biased. From the authors:\n\n\nDisclaimer/Content warning: the events in atomic have been automatically extracted from blogs, stories and books written at various times. The events might depict violent or problematic actions, which we left in the corpus for the sake of learning the (probably negative but still important) commonsense implications associated with the events. We removed a small set of truly out-dated events, but might have missed some so please email us (msap@URL) if you have any concerns." ]
d1951a019d5dedcb8ce47f55bce6328d31f69956
# Dataset Card for autshumato

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://repo.sadilar.org/handle/20.500.12185/7/discover](https://repo.sadilar.org/handle/20.500.12185/7/discover)
- **Repository:** []()
- **Paper:** []()
- **Leaderboard:** []()
- **Point of Contact:** []()

### Dataset Summary

Multilingual information access is stipulated in the South African constitution. In practice, this
is hampered by a lack of resources and capacity to perform the large volumes of translation
work required to realise multilingual information access. One of the aims of the Autshumato
project is to develop machine translation systems for three South African language pairs.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

[More Information Needed]

### Dataset Curators

[More Information Needed]

### Licensing Information

### Citation Information

```
@article{groenewald2010processing,
  title={Processing parallel text corpora for three South African language pairs in the Autshumato project},
  author={Groenewald, Hendrik J and du Plooy, Liza},
  journal={AfLaT 2010},
  pages={27},
  year={2010}
}
```

### Contributions

Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
autshumato
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:tn", "language:ts", "language:zu", "license:cc-by-2.5", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en", "tn", "ts", "zu"], "license": ["cc-by-2.5"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M", "10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "autshumato", "config_names": ["autshumato-en-tn", "autshumato-en-ts", "autshumato-en-ts-manual", "autshumato-en-zu", "autshumato-tn", "autshumato-ts"], "dataset_info": [{"config_name": "autshumato-en-tn", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "tn"]}}}], "splits": [{"name": "train", "num_bytes": 28826392, "num_examples": 159000}], "download_size": 9458762, "dataset_size": 28826392}, {"config_name": "autshumato-en-zu", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "zu"]}}}], "splits": [{"name": "train", "num_bytes": 7188970, "num_examples": 35489}], "download_size": 2068891, "dataset_size": 7188970}, {"config_name": "autshumato-en-ts", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "ts"]}}}], "splits": [{"name": "train", "num_bytes": 50803849, "num_examples": 450000}], "download_size": 15145915, "dataset_size": 50803849}, {"config_name": "autshumato-en-ts-manual", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "ts"]}}}], "splits": [{"name": "train", "num_bytes": 10408757, "num_examples": 92396}], "download_size": 2876924, "dataset_size": 10408757}, {"config_name": "autshumato-tn", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5132267, "num_examples": 38206}], "download_size": 1599029, "dataset_size": 5132267}, {"config_name": "autshumato-ts", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3399674, "num_examples": 58398}], "download_size": 974488, "dataset_size": 3399674}]}
2024-01-18T11:01:55+00:00
[]
[ "en", "tn", "ts", "zu" ]
TAGS #task_categories-translation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-English #language-Tswana #language-Tsonga #language-Zulu #license-cc-by-2.5 #region-us
# Dataset Card for autshumato

## Table of Contents
- Dataset Description
 - Dataset Summary
 - Supported Tasks and Leaderboards
 - Languages
- Dataset Structure
 - Data Instances
 - Data Fields
 - Data Splits
- Dataset Creation
 - Curation Rationale
 - Source Data
 - Annotations
 - Personal and Sensitive Information
- Considerations for Using the Data
 - Social Impact of Dataset
 - Discussion of Biases
 - Other Known Limitations
- Additional Information
 - Dataset Curators
 - Licensing Information
 - Citation Information
 - Contributions

## Dataset Description

- Homepage: [URL
- Repository: []()
- Paper: []()
- Leaderboard: []()
- Point of Contact: []()

### Dataset Summary

Multilingual information access is stipulated in the South African constitution. In practice, this
is hampered by a lack of resources and capacity to perform the large volumes of translation
work required to realise multilingual information access. One of the aims of the Autshumato
project is to develop machine translation systems for three South African language pairs.

### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

### Data Instances

### Data Fields

### Data Splits

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

### Licensing Information

### Contributions

Thanks to @Narsil for adding this dataset.
[ "# Dataset Card for autshumato", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: [URL\n- Repository: []()\n- Paper: []()\n- Leaderboard: []()\n- Point of Contact: []()", "### Dataset Summary\n\nMultilingual information access is stipulated in the South African constitution. In practise, this\nis hampered by a lack of resources and capacity to perform the large volumes of translation\nwork required to realise multilingual information access. One of the aims of the Autshumato\nproject is to develop machine translation systems for three South African languages pairs.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @Narsil for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-English #language-Tswana #language-Tsonga #language-Zulu #license-cc-by-2.5 #region-us \n", "# Dataset Card for autshumato", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: [URL\n- Repository: []()\n- Paper: []()\n- Leaderboard: []()\n- Point of Contact: []()", "### Dataset Summary\n\nMultilingual information access is stipulated in the South African constitution. In practise, this\nis hampered by a lack of resources and capacity to perform the large volumes of translation\nwork required to realise multilingual information access. One of the aims of the Autshumato\nproject is to develop machine translation systems for three South African languages pairs.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @Narsil for adding this dataset." ]
[ 109, 8, 120, 42, 81, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 16 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-original #language-English #language-Tswana #language-Tsonga #language-Zulu #license-cc-by-2.5 #region-us \n# Dataset Card for autshumato## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: [URL\n- Repository: []()\n- Paper: []()\n- Leaderboard: []()\n- Point of Contact: []()### Dataset Summary\n\nMultilingual information access is stipulated in the South African constitution. In practise, this\nis hampered by a lack of resources and capacity to perform the large volumes of translation\nwork required to realise multilingual information access. One of the aims of the Autshumato\nproject is to develop machine translation systems for three South African languages pairs.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information" ]
021d7aeb7307b7856dd0632f92827bc607dc2f1b
# Dataset Card for bAbI QA

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [The bAbI project](https://research.fb.com/downloads/babi/)
- **Repository:**
- **Paper:** [arXiv Paper](https://arxiv.org/pdf/1502.05698.pdf)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.

### Supported Tasks and Leaderboards

The dataset supports a set of 20 proxy story-based question answering tasks for various "types" in English and Hindi. The tasks are:

|task_no|task_name|
|----|------------|
|qa1 |single-supporting-fact|
|qa2 |two-supporting-facts|
|qa3 |three-supporting-facts|
|qa4 |two-arg-relations|
|qa5 |three-arg-relations|
|qa6 |yes-no-questions|
|qa7 |counting|
|qa8 |lists-sets|
|qa9 |simple-negation|
|qa10| indefinite-knowledge|
|qa11| basic-coreference|
|qa12| conjunction|
|qa13| compound-coreference|
|qa14| time-reasoning|
|qa15| basic-deduction|
|qa16| basic-induction|
|qa17| positional-reasoning|
|qa18| size-reasoning|
|qa19| path-finding|
|qa20| agents-motivations|

The "types" are:

- `en` - the tasks in English, readable by humans.
- `hn` - the tasks in Hindi, readable by humans.
- `shuffled` - the same tasks with shuffled letters, so that they are not readable by humans and existing parsers and taggers cannot be used in a straightforward fashion to leverage extra resources -- in this case the learner is more forced to rely on the given training data. This mimics a learner being first presented a language and having to learn from scratch.
- `en-10k`, `shuffled-10k` and `hn-10k` - the same tasks in the three formats, but with 10,000 training examples, rather than 1000 training examples.
- `en-valid` and `en-valid-10k` - are the same as `en` and `en-10k` except the train sets have been conveniently split into train and valid portions (90% and 10% split).

To get a particular dataset, use `load_dataset('babi_qa',type=f'{type}',task_no=f'{task_no}')` where `type` is one of the types, and `task_no` is one of the task numbers. For example, `load_dataset('babi_qa', type='en', task_no='qa1')`.
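A minimal end-to-end sketch of the call above, assuming the Hugging Face `datasets` library:

```
from datasets import load_dataset

# Load task qa1 (single-supporting-fact) in the human-readable English type,
# using the loader arguments documented above.
babi = load_dataset("babi_qa", type="en", task_no="qa1")

story = babi["train"][0]["story"]
print(story["text"][:3])  # the first few context/question lines of one story
```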
### Languages

## Dataset Structure

### Data Instances

An instance from the `en-qa1` config's `train` split:

```
{'story': {'answer': ['', '', 'bathroom', '', '', 'hallway', '', '', 'hallway', '', '', 'office', '', '', 'bathroom'], 'id': ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15'], 'supporting_ids': [[], [], ['1'], [], [], ['4'], [], [], ['4'], [], [], ['11'], [], [], ['8']], 'text': ['Mary moved to the bathroom.', 'John went to the hallway.', 'Where is Mary?', 'Daniel went back to the hallway.', 'Sandra moved to the garden.', 'Where is Daniel?', 'John moved to the office.', 'Sandra journeyed to the bathroom.', 'Where is Daniel?', 'Mary moved to the hallway.', 'Daniel travelled to the office.', 'Where is Daniel?', 'John went back to the garden.', 'John moved to the bedroom.', 'Where is Sandra?'], 'type': [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1]}}
```

### Data Fields

- `story`: a dictionary feature containing:
  - `id`: a `string` feature, which denotes the line number in the example.
  - `type`: a classification label, with possible values including `context`, `question`, denoting whether the text is context or a question.
  - `text`: a `string` feature containing the text, whether it is a question or context.
  - `supporting_ids`: a `list` of `string` features containing the line numbers of the lines in the example which support the answer.
  - `answer`: a `string` feature containing the answer to the question, or an empty string if the `type` is not `question`.

### Data Splits

The splits and corresponding sizes are:

| | train | test | validation |
|-------------------|---------|--------|--------------|
| en-qa1 | 200 | 200 | - |
| en-qa2 | 200 | 200 | - |
| en-qa3 | 200 | 200 | - |
| en-qa4 | 1000 | 1000 | - |
| en-qa5 | 200 | 200 | - |
| en-qa6 | 200 | 200 | - |
| en-qa7 | 200 | 200 | - |
| en-qa8 | 200 | 200 | - |
| en-qa9 | 200 | 200 | - |
| en-qa10 | 200 | 200 | - |
| en-qa11 | 200 | 200 | - |
| en-qa12 | 200 | 200 | - |
| en-qa13 | 200 | 200 | - |
| en-qa14 | 200 | 200 | - |
| en-qa15 | 250 | 250 | - |
| en-qa16 | 1000 | 1000 | - |
| en-qa17 | 125 | 125 | - |
| en-qa18 | 198 | 199 | - |
| en-qa19 | 1000 | 1000 | - |
| en-qa20 | 94 | 93 | - |
| en-10k-qa1 | 2000 | 200 | - |
| en-10k-qa2 | 2000 | 200 | - |
| en-10k-qa3 | 2000 | 200 | - |
| en-10k-qa4 | 10000 | 1000 | - |
| en-10k-qa5 | 2000 | 200 | - |
| en-10k-qa6 | 2000 | 200 | - |
| en-10k-qa7 | 2000 | 200 | - |
| en-10k-qa8 | 2000 | 200 | - |
| en-10k-qa9 | 2000 | 200 | - |
| en-10k-qa10 | 2000 | 200 | - |
| en-10k-qa11 | 2000 | 200 | - |
| en-10k-qa12 | 2000 | 200 | - |
| en-10k-qa13 | 2000 | 200 | - |
| en-10k-qa14 | 2000 | 200 | - |
| en-10k-qa15 | 2500 | 250 | - |
| en-10k-qa16 | 10000 | 1000 | - |
| en-10k-qa17 | 1250 | 125 | - |
| en-10k-qa18 | 1978 | 199 | - |
| en-10k-qa19 | 10000 | 1000 | - |
| en-10k-qa20 | 933 | 93 | - |
| en-valid-qa1 | 180 | 200 | 20 |
| en-valid-qa2 | 180 | 200 | 20 |
| en-valid-qa3 | 180 | 200 | 20 |
| en-valid-qa4 | 900 | 1000 | 100 |
| en-valid-qa5 | 180 | 200 | 20 |
| en-valid-qa6 | 180 | 200 | 20 |
| en-valid-qa7 | 180 | 200 | 20 |
| en-valid-qa8 | 180 | 200 | 20 |
| en-valid-qa9 | 180 | 200 | 20 |
| en-valid-qa10 | 180 | 200 | 20 |
| en-valid-qa11 | 180 | 200 | 20 |
| en-valid-qa12 | 180 | 200 | 20 |
| en-valid-qa13 | 180 | 200 | 20 |
| en-valid-qa14 | 180 | 200 | 20 |
| en-valid-qa15 | 225 | 250 | 25 |
| en-valid-qa16 | 900 | 1000 | 100 |
| en-valid-qa17 | 113 | 125 | 12 |
| en-valid-qa18 | 179 | 199 | 19 |
| en-valid-qa19 | 900 | 1000 | 100 |
| en-valid-qa20 | 85 
| 93 | 9 | | en-valid-10k-qa1 | 1800 | 200 | 200 | | en-valid-10k-qa2 | 1800 | 200 | 200 | | en-valid-10k-qa3 | 1800 | 200 | 200 | | en-valid-10k-qa4 | 9000 | 1000 | 1000 | | en-valid-10k-qa5 | 1800 | 200 | 200 | | en-valid-10k-qa6 | 1800 | 200 | 200 | | en-valid-10k-qa7 | 1800 | 200 | 200 | | en-valid-10k-qa8 | 1800 | 200 | 200 | | en-valid-10k-qa9 | 1800 | 200 | 200 | | en-valid-10k-qa10 | 1800 | 200 | 200 | | en-valid-10k-qa11 | 1800 | 200 | 200 | | en-valid-10k-qa12 | 1800 | 200 | 200 | | en-valid-10k-qa13 | 1800 | 200 | 200 | | en-valid-10k-qa14 | 1800 | 200 | 200 | | en-valid-10k-qa15 | 2250 | 250 | 250 | | en-valid-10k-qa16 | 9000 | 1000 | 1000 | | en-valid-10k-qa17 | 1125 | 125 | 125 | | en-valid-10k-qa18 | 1781 | 199 | 197 | | en-valid-10k-qa19 | 9000 | 1000 | 1000 | | en-valid-10k-qa20 | 840 | 93 | 93 | | hn-qa1 | 200 | 200 | - | | hn-qa2 | 200 | 200 | - | | hn-qa3 | 167 | 167 | - | | hn-qa4 | 1000 | 1000 | - | | hn-qa5 | 200 | 200 | - | | hn-qa6 | 200 | 200 | - | | hn-qa7 | 200 | 200 | - | | hn-qa8 | 200 | 200 | - | | hn-qa9 | 200 | 200 | - | | hn-qa10 | 200 | 200 | - | | hn-qa11 | 200 | 200 | - | | hn-qa12 | 200 | 200 | - | | hn-qa13 | 125 | 125 | - | | hn-qa14 | 200 | 200 | - | | hn-qa15 | 250 | 250 | - | | hn-qa16 | 1000 | 1000 | - | | hn-qa17 | 125 | 125 | - | | hn-qa18 | 198 | 198 | - | | hn-qa19 | 1000 | 1000 | - | | hn-qa20 | 93 | 94 | - | | hn-10k-qa1 | 2000 | 200 | - | | hn-10k-qa2 | 2000 | 200 | - | | hn-10k-qa3 | 1667 | 167 | - | | hn-10k-qa4 | 10000 | 1000 | - | | hn-10k-qa5 | 2000 | 200 | - | | hn-10k-qa6 | 2000 | 200 | - | | hn-10k-qa7 | 2000 | 200 | - | | hn-10k-qa8 | 2000 | 200 | - | | hn-10k-qa9 | 2000 | 200 | - | | hn-10k-qa10 | 2000 | 200 | - | | hn-10k-qa11 | 2000 | 200 | - | | hn-10k-qa12 | 2000 | 200 | - | | hn-10k-qa13 | 1250 | 125 | - | | hn-10k-qa14 | 2000 | 200 | - | | hn-10k-qa15 | 2500 | 250 | - | | hn-10k-qa16 | 10000 | 1000 | - | | hn-10k-qa17 | 1250 | 125 | - | | hn-10k-qa18 | 1977 | 198 | - | | hn-10k-qa19 | 10000 | 1000 | - | | hn-10k-qa20 | 934 | 94 | - | | shuffled-qa1 | 200 | 200 | - | | shuffled-qa2 | 200 | 200 | - | | shuffled-qa3 | 200 | 200 | - | | shuffled-qa4 | 1000 | 1000 | - | | shuffled-qa5 | 200 | 200 | - | | shuffled-qa6 | 200 | 200 | - | | shuffled-qa7 | 200 | 200 | - | | shuffled-qa8 | 200 | 200 | - | | shuffled-qa9 | 200 | 200 | - | | shuffled-qa10 | 200 | 200 | - | | shuffled-qa11 | 200 | 200 | - | | shuffled-qa12 | 200 | 200 | - | | shuffled-qa13 | 200 | 200 | - | | shuffled-qa14 | 200 | 200 | - | | shuffled-qa15 | 250 | 250 | - | | shuffled-qa16 | 1000 | 1000 | - | | shuffled-qa17 | 125 | 125 | - | | shuffled-qa18 | 198 | 199 | - | | shuffled-qa19 | 1000 | 1000 | - | | shuffled-qa20 | 94 | 93 | - | | shuffled-10k-qa1 | 2000 | 200 | - | | shuffled-10k-qa2 | 2000 | 200 | - | | shuffled-10k-qa3 | 2000 | 200 | - | | shuffled-10k-qa4 | 10000 | 1000 | - | | shuffled-10k-qa5 | 2000 | 200 | - | | shuffled-10k-qa6 | 2000 | 200 | - | | shuffled-10k-qa7 | 2000 | 200 | - | | shuffled-10k-qa8 | 2000 | 200 | - | | shuffled-10k-qa9 | 2000 | 200 | - | | shuffled-10k-qa10 | 2000 | 200 | - | | shuffled-10k-qa11 | 2000 | 200 | - | | shuffled-10k-qa12 | 2000 | 200 | - | | shuffled-10k-qa13 | 2000 | 200 | - | | shuffled-10k-qa14 | 2000 | 200 | - | | shuffled-10k-qa15 | 2500 | 250 | - | | shuffled-10k-qa16 | 10000 | 1000 | - | | shuffled-10k-qa17 | 1250 | 125 | - | | shuffled-10k-qa18 | 1978 | 199 | - | | shuffled-10k-qa19 | 10000 | 1000 | - | | shuffled-10k-qa20 | 933 | 93 | - | ## Dataset Creation ### Curation Rationale [More Information Needed] 
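As an aside on working with the schema above, the nested `story` structure can be flattened into (context, question, answer) triples for standard QA pipelines. A minimal sketch, assuming records shaped like the Data Instances example; the helper name is illustrative:

```
def flatten_story(story):
    """Turn one bAbI story dict into (context_so_far, question, answer) triples."""
    triples, context = [], []
    for text, line_type, answer in zip(story["text"], story["type"], story["answer"]):
        if line_type == 0:   # a context statement
            context.append(text)
        else:                # a question (type == 1); `answer` is non-empty here
            triples.append((" ".join(context), text, answer))
    return triples
```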
### Source Data

#### Initial Data Collection and Normalization

Code to generate tasks is available on [github](https://github.com/facebook/bAbI-tasks).

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Jesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston, at Facebook Research.

### Licensing Information

```
Creative Commons Attribution 3.0 License
```

### Citation Information

```
@misc{dodge2016evaluating,
      title={Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems},
      author={Jesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston},
      year={2016},
      eprint={1511.06931},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset.
facebook/babi_qa
[ "task_categories:question-answering", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-3.0", "chained-qa", "arxiv:1502.05698", "arxiv:1511.06931", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K", "1K<n<10K", "n<1K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": [], "paperswithcode_id": "babi-1", "pretty_name": "BabiQa", "configs": ["en-10k-qa1", "en-10k-qa10", "en-10k-qa11", "en-10k-qa12", "en-10k-qa13", "en-10k-qa14", "en-10k-qa15", "en-10k-qa16", "en-10k-qa17", "en-10k-qa18", "en-10k-qa19", "en-10k-qa2", "en-10k-qa20", "en-10k-qa3", "en-10k-qa4", "en-10k-qa5", "en-10k-qa6", "en-10k-qa7", "en-10k-qa8", "en-10k-qa9", "en-qa1", "en-qa10", "en-qa11", "en-qa12", "en-qa13", "en-qa14", "en-qa15", "en-qa16", "en-qa17", "en-qa18", "en-qa19", "en-qa2", "en-qa20", "en-qa3", "en-qa4", "en-qa5", "en-qa6", "en-qa7", "en-qa8", "en-qa9", "en-valid-10k-qa1", "en-valid-10k-qa10", "en-valid-10k-qa11", "en-valid-10k-qa12", "en-valid-10k-qa13", "en-valid-10k-qa14", "en-valid-10k-qa15", "en-valid-10k-qa16", "en-valid-10k-qa17", "en-valid-10k-qa18", "en-valid-10k-qa19", "en-valid-10k-qa2", "en-valid-10k-qa20", "en-valid-10k-qa3", "en-valid-10k-qa4", "en-valid-10k-qa5", "en-valid-10k-qa6", "en-valid-10k-qa7", "en-valid-10k-qa8", "en-valid-10k-qa9", "en-valid-qa1", "en-valid-qa10", "en-valid-qa11", "en-valid-qa12", "en-valid-qa13", "en-valid-qa14", "en-valid-qa15", "en-valid-qa16", "en-valid-qa17", "en-valid-qa18", "en-valid-qa19", "en-valid-qa2", "en-valid-qa20", "en-valid-qa3", "en-valid-qa4", "en-valid-qa5", "en-valid-qa6", "en-valid-qa7", "en-valid-qa8", "en-valid-qa9", "hn-10k-qa1", "hn-10k-qa10", "hn-10k-qa11", "hn-10k-qa12", "hn-10k-qa13", "hn-10k-qa14", "hn-10k-qa15", "hn-10k-qa16", "hn-10k-qa17", "hn-10k-qa18", "hn-10k-qa19", "hn-10k-qa2", "hn-10k-qa20", "hn-10k-qa3", "hn-10k-qa4", "hn-10k-qa5", "hn-10k-qa6", "hn-10k-qa7", "hn-10k-qa8", "hn-10k-qa9", "hn-qa1", "hn-qa10", "hn-qa11", "hn-qa12", "hn-qa13", "hn-qa14", "hn-qa15", "hn-qa16", "hn-qa17", "hn-qa18", "hn-qa19", "hn-qa2", "hn-qa20", "hn-qa3", "hn-qa4", "hn-qa5", "hn-qa6", "hn-qa7", "hn-qa8", "hn-qa9", "shuffled-10k-qa1", "shuffled-10k-qa10", "shuffled-10k-qa11", "shuffled-10k-qa12", "shuffled-10k-qa13", "shuffled-10k-qa14", "shuffled-10k-qa15", "shuffled-10k-qa16", "shuffled-10k-qa17", "shuffled-10k-qa18", "shuffled-10k-qa19", "shuffled-10k-qa2", "shuffled-10k-qa20", "shuffled-10k-qa3", "shuffled-10k-qa4", "shuffled-10k-qa5", "shuffled-10k-qa6", "shuffled-10k-qa7", "shuffled-10k-qa8", "shuffled-10k-qa9", "shuffled-qa1", "shuffled-qa10", "shuffled-qa11", "shuffled-qa12", "shuffled-qa13", "shuffled-qa14", "shuffled-qa15", "shuffled-qa16", "shuffled-qa17", "shuffled-qa18", "shuffled-qa19", "shuffled-qa2", "shuffled-qa20", "shuffled-qa3", "shuffled-qa4", "shuffled-qa5", "shuffled-qa6", "shuffled-qa7", "shuffled-qa8", "shuffled-qa9"], "tags": ["chained-qa"], "dataset_info": [{"config_name": "en-qa1", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 165386, "num_examples": 200}, {"name": "test", "num_bytes": 165517, "num_examples": 200}], "download_size": 15719851, "dataset_size": 330903}, {"config_name": "en-qa2", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", 
"dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 302888, "num_examples": 200}, {"name": "test", "num_bytes": 306631, "num_examples": 200}], "download_size": 15719851, "dataset_size": 609519}, {"config_name": "en-qa3", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 887756, "num_examples": 200}, {"name": "test", "num_bytes": 883187, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1770943}, {"config_name": "en-qa4", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 205510, "num_examples": 1000}, {"name": "test", "num_bytes": 205434, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 410944}, {"config_name": "en-qa5", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 337349, "num_examples": 200}, {"name": "test", "num_bytes": 350457, "num_examples": 200}], "download_size": 15719851, "dataset_size": 687806}, {"config_name": "en-qa6", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 173053, "num_examples": 200}, {"name": "test", "num_bytes": 172249, "num_examples": 200}], "download_size": 15719851, "dataset_size": 345302}, {"config_name": "en-qa7", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 224778, "num_examples": 200}, {"name": "test", "num_bytes": 215512, "num_examples": 200}], "download_size": 15719851, "dataset_size": 440290}, {"config_name": "en-qa8", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 212517, "num_examples": 200}, {"name": "test", "num_bytes": 216244, "num_examples": 200}], "download_size": 15719851, "dataset_size": 428761}, {"config_name": "en-qa9", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, 
{"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 168350, "num_examples": 200}, {"name": "test", "num_bytes": 168248, "num_examples": 200}], "download_size": 15719851, "dataset_size": 336598}, {"config_name": "en-qa10", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 170257, "num_examples": 200}, {"name": "test", "num_bytes": 170672, "num_examples": 200}], "download_size": 15719851, "dataset_size": 340929}, {"config_name": "en-qa11", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 178560, "num_examples": 200}, {"name": "test", "num_bytes": 178840, "num_examples": 200}], "download_size": 15719851, "dataset_size": 357400}, {"config_name": "en-qa12", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 185600, "num_examples": 200}, {"name": "test", "num_bytes": 185529, "num_examples": 200}], "download_size": 15719851, "dataset_size": 371129}, {"config_name": "en-qa13", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 190556, "num_examples": 200}, {"name": "test", "num_bytes": 190484, "num_examples": 200}], "download_size": 15719851, "dataset_size": 381040}, {"config_name": "en-qa14", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 234355, "num_examples": 200}, {"name": "test", "num_bytes": 233204, "num_examples": 200}], "download_size": 15719851, "dataset_size": 467559}, {"config_name": "en-qa15", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 163728, "num_examples": 250}, {"name": "test", "num_bytes": 163809, "num_examples": 250}], "download_size": 15719851, "dataset_size": 327537}, {"config_name": "en-qa16", "features": [{"name": "story", "sequence": [{"name": "id", 
"dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 456374, "num_examples": 1000}, {"name": "test", "num_bytes": 456248, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 912622}, {"config_name": "en-qa17", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 103636, "num_examples": 125}, {"name": "test", "num_bytes": 103618, "num_examples": 125}], "download_size": 15719851, "dataset_size": 207254}, {"config_name": "en-qa18", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 162875, "num_examples": 198}, {"name": "test", "num_bytes": 161266, "num_examples": 199}], "download_size": 15719851, "dataset_size": 324141}, {"config_name": "en-qa19", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 404536, "num_examples": 1000}, {"name": "test", "num_bytes": 404489, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 809025}, {"config_name": "en-qa20", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 115812, "num_examples": 94}, {"name": "test", "num_bytes": 115863, "num_examples": 93}], "download_size": 15719851, "dataset_size": 231675}, {"config_name": "hn-qa1", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 168605, "num_examples": 200}, {"name": "test", "num_bytes": 168572, "num_examples": 200}], "download_size": 15719851, "dataset_size": 337177}, {"config_name": "hn-qa2", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 296391, "num_examples": 200}, {"name": "test", "num_bytes": 288429, "num_examples": 200}], "download_size": 15719851, "dataset_size": 584820}, {"config_name": "hn-qa3", "features": [{"name": "story", "sequence": 
[{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 842184, "num_examples": 167}, {"name": "test", "num_bytes": 808460, "num_examples": 167}], "download_size": 15719851, "dataset_size": 1650644}, {"config_name": "hn-qa4", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 231303, "num_examples": 1000}, {"name": "test", "num_bytes": 231230, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 462533}, {"config_name": "hn-qa5", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 320859, "num_examples": 200}, {"name": "test", "num_bytes": 315396, "num_examples": 200}], "download_size": 15719851, "dataset_size": 636255}, {"config_name": "hn-qa6", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 170796, "num_examples": 200}, {"name": "test", "num_bytes": 171360, "num_examples": 200}], "download_size": 15719851, "dataset_size": 342156}, {"config_name": "hn-qa7", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 206981, "num_examples": 200}, {"name": "test", "num_bytes": 208080, "num_examples": 200}], "download_size": 15719851, "dataset_size": 415061}, {"config_name": "hn-qa8", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 211584, "num_examples": 200}, {"name": "test", "num_bytes": 222232, "num_examples": 200}], "download_size": 15719851, "dataset_size": 433816}, {"config_name": "hn-qa9", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 187718, "num_examples": 200}, {"name": "test", "num_bytes": 187341, "num_examples": 200}], "download_size": 15719851, "dataset_size": 375059}, {"config_name": "hn-qa10", "features": [{"name": "story", 
"sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 183583, "num_examples": 200}, {"name": "test", "num_bytes": 182932, "num_examples": 200}], "download_size": 15719851, "dataset_size": 366515}, {"config_name": "hn-qa11", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 179698, "num_examples": 200}, {"name": "test", "num_bytes": 180461, "num_examples": 200}], "download_size": 15719851, "dataset_size": 360159}, {"config_name": "hn-qa12", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 187731, "num_examples": 200}, {"name": "test", "num_bytes": 187954, "num_examples": 200}], "download_size": 15719851, "dataset_size": 375685}, {"config_name": "hn-qa13", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 191395, "num_examples": 125}, {"name": "test", "num_bytes": 191747, "num_examples": 125}], "download_size": 15719851, "dataset_size": 383142}, {"config_name": "hn-qa14", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 240659, "num_examples": 200}, {"name": "test", "num_bytes": 240436, "num_examples": 200}], "download_size": 15719851, "dataset_size": 481095}, {"config_name": "hn-qa15", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 170358, "num_examples": 250}, {"name": "test", "num_bytes": 170259, "num_examples": 250}], "download_size": 15719851, "dataset_size": 340617}, {"config_name": "hn-qa16", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 523093, "num_examples": 1000}, {"name": "test", "num_bytes": 523032, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 1046125}, {"config_name": "hn-qa17", "features": 
[{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 103878, "num_examples": 125}, {"name": "test", "num_bytes": 104061, "num_examples": 125}], "download_size": 15719851, "dataset_size": 207939}, {"config_name": "hn-qa18", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 173056, "num_examples": 198}, {"name": "test", "num_bytes": 176824, "num_examples": 198}], "download_size": 15719851, "dataset_size": 349880}, {"config_name": "hn-qa19", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 470225, "num_examples": 1000}, {"name": "test", "num_bytes": 470479, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 940704}, {"config_name": "hn-qa20", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 115021, "num_examples": 93}, {"name": "test", "num_bytes": 115088, "num_examples": 94}], "download_size": 15719851, "dataset_size": 230109}, {"config_name": "en-10k-qa1", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1654288, "num_examples": 2000}, {"name": "test", "num_bytes": 165517, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1819805}, {"config_name": "en-10k-qa2", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3062580, "num_examples": 2000}, {"name": "test", "num_bytes": 306631, "num_examples": 200}], "download_size": 15719851, "dataset_size": 3369211}, {"config_name": "en-10k-qa3", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 8921215, "num_examples": 2000}, {"name": "test", "num_bytes": 883187, "num_examples": 200}], "download_size": 15719851, "dataset_size": 9804402}, 
{"config_name": "en-10k-qa4", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2055105, "num_examples": 10000}, {"name": "test", "num_bytes": 205434, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 2260539}, {"config_name": "en-10k-qa5", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3592157, "num_examples": 2000}, {"name": "test", "num_bytes": 350457, "num_examples": 200}], "download_size": 15719851, "dataset_size": 3942614}, {"config_name": "en-10k-qa6", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1726716, "num_examples": 2000}, {"name": "test", "num_bytes": 172249, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1898965}, {"config_name": "en-10k-qa7", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2228087, "num_examples": 2000}, {"name": "test", "num_bytes": 215512, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2443599}, {"config_name": "en-10k-qa8", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2141880, "num_examples": 2000}, {"name": "test", "num_bytes": 216244, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2358124}, {"config_name": "en-10k-qa9", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1681213, "num_examples": 2000}, {"name": "test", "num_bytes": 168248, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1849461}, {"config_name": "en-10k-qa10", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1707675, "num_examples": 2000}, {"name": "test", "num_bytes": 170672, "num_examples": 
200}], "download_size": 15719851, "dataset_size": 1878347}, {"config_name": "en-10k-qa11", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1786179, "num_examples": 2000}, {"name": "test", "num_bytes": 178840, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1965019}, {"config_name": "en-10k-qa12", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1854745, "num_examples": 2000}, {"name": "test", "num_bytes": 185529, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2040274}, {"config_name": "en-10k-qa13", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1903149, "num_examples": 2000}, {"name": "test", "num_bytes": 190484, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2093633}, {"config_name": "en-10k-qa14", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2321511, "num_examples": 2000}, {"name": "test", "num_bytes": 233204, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2554715}, {"config_name": "en-10k-qa15", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1637398, "num_examples": 2500}, {"name": "test", "num_bytes": 163809, "num_examples": 250}], "download_size": 15719851, "dataset_size": 1801207}, {"config_name": "en-10k-qa16", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4562844, "num_examples": 10000}, {"name": "test", "num_bytes": 456248, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 5019092}, {"config_name": "en-10k-qa17", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1034333, 
"num_examples": 1250}, {"name": "test", "num_bytes": 103618, "num_examples": 125}], "download_size": 15719851, "dataset_size": 1137951}, {"config_name": "en-10k-qa18", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1641650, "num_examples": 1978}, {"name": "test", "num_bytes": 161266, "num_examples": 199}], "download_size": 15719851, "dataset_size": 1802916}, {"config_name": "en-10k-qa19", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4045086, "num_examples": 10000}, {"name": "test", "num_bytes": 404489, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 4449575}, {"config_name": "en-10k-qa20", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1157351, "num_examples": 933}, {"name": "test", "num_bytes": 115863, "num_examples": 93}], "download_size": 15719851, "dataset_size": 1273214}, {"config_name": "en-valid-qa1", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 148887, "num_examples": 180}, {"name": "test", "num_bytes": 165517, "num_examples": 200}, {"name": "validation", "num_bytes": 16539, "num_examples": 20}], "download_size": 15719851, "dataset_size": 330943}, {"config_name": "en-valid-qa2", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 275106, "num_examples": 180}, {"name": "test", "num_bytes": 306631, "num_examples": 200}, {"name": "validation", "num_bytes": 27822, "num_examples": 20}], "download_size": 15719851, "dataset_size": 609559}, {"config_name": "en-valid-qa3", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 794565, "num_examples": 180}, {"name": "test", "num_bytes": 883187, "num_examples": 200}, {"name": "validation", "num_bytes": 93231, "num_examples": 20}], "download_size": 15719851, "dataset_size": 1770983}, {"config_name": "en-valid-qa4", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", 
"dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 184992, "num_examples": 900}, {"name": "test", "num_bytes": 205434, "num_examples": 1000}, {"name": "validation", "num_bytes": 20558, "num_examples": 100}], "download_size": 15719851, "dataset_size": 410984}, {"config_name": "en-valid-qa5", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 305472, "num_examples": 180}, {"name": "test", "num_bytes": 350457, "num_examples": 200}, {"name": "validation", "num_bytes": 31917, "num_examples": 20}], "download_size": 15719851, "dataset_size": 687846}, {"config_name": "en-valid-qa6", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 155845, "num_examples": 180}, {"name": "test", "num_bytes": 172249, "num_examples": 200}, {"name": "validation", "num_bytes": 17248, "num_examples": 20}], "download_size": 15719851, "dataset_size": 345342}, {"config_name": "en-valid-qa7", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 203642, "num_examples": 180}, {"name": "test", "num_bytes": 215512, "num_examples": 200}, {"name": "validation", "num_bytes": 21176, "num_examples": 20}], "download_size": 15719851, "dataset_size": 440330}, {"config_name": "en-valid-qa8", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 191599, "num_examples": 180}, {"name": "test", "num_bytes": 216244, "num_examples": 200}, {"name": "validation", "num_bytes": 20958, "num_examples": 20}], "download_size": 15719851, "dataset_size": 428801}, {"config_name": "en-valid-qa9", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 151458, "num_examples": 180}, {"name": "test", "num_bytes": 168248, "num_examples": 200}, {"name": "validation", "num_bytes": 16932, "num_examples": 20}], "download_size": 15719851, "dataset_size": 336638}, {"config_name": "en-valid-qa10", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, 
{"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 153240, "num_examples": 180}, {"name": "test", "num_bytes": 170672, "num_examples": 200}, {"name": "validation", "num_bytes": 17057, "num_examples": 20}], "download_size": 15719851, "dataset_size": 340969}, {"config_name": "en-valid-qa11", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 160701, "num_examples": 180}, {"name": "test", "num_bytes": 178840, "num_examples": 200}, {"name": "validation", "num_bytes": 17899, "num_examples": 20}], "download_size": 15719851, "dataset_size": 357440}, {"config_name": "en-valid-qa12", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 167031, "num_examples": 180}, {"name": "test", "num_bytes": 185529, "num_examples": 200}, {"name": "validation", "num_bytes": 18609, "num_examples": 20}], "download_size": 15719851, "dataset_size": 371169}, {"config_name": "en-valid-qa13", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 171527, "num_examples": 180}, {"name": "test", "num_bytes": 190484, "num_examples": 200}, {"name": "validation", "num_bytes": 19069, "num_examples": 20}], "download_size": 15719851, "dataset_size": 381080}, {"config_name": "en-valid-qa14", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 210650, "num_examples": 180}, {"name": "test", "num_bytes": 233204, "num_examples": 200}, {"name": "validation", "num_bytes": 23745, "num_examples": 20}], "download_size": 15719851, "dataset_size": 467599}, {"config_name": "en-valid-qa15", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 147356, "num_examples": 225}, {"name": "test", "num_bytes": 163809, "num_examples": 250}, {"name": "validation", "num_bytes": 16412, "num_examples": 25}], "download_size": 15719851, "dataset_size": 327577}, {"config_name": "en-valid-qa16", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", 
"sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 410711, "num_examples": 900}, {"name": "test", "num_bytes": 456248, "num_examples": 1000}, {"name": "validation", "num_bytes": 45703, "num_examples": 100}], "download_size": 15719851, "dataset_size": 912662}, {"config_name": "en-valid-qa17", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 93596, "num_examples": 113}, {"name": "test", "num_bytes": 103618, "num_examples": 125}, {"name": "validation", "num_bytes": 10080, "num_examples": 12}], "download_size": 15719851, "dataset_size": 207294}, {"config_name": "en-valid-qa18", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 147338, "num_examples": 179}, {"name": "test", "num_bytes": 161266, "num_examples": 199}, {"name": "validation", "num_bytes": 15577, "num_examples": 19}], "download_size": 15719851, "dataset_size": 324181}, {"config_name": "en-valid-qa19", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 364090, "num_examples": 900}, {"name": "test", "num_bytes": 404489, "num_examples": 1000}, {"name": "validation", "num_bytes": 40486, "num_examples": 100}], "download_size": 15719851, "dataset_size": 809065}, {"config_name": "en-valid-qa20", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 104706, "num_examples": 85}, {"name": "test", "num_bytes": 115863, "num_examples": 93}, {"name": "validation", "num_bytes": 11146, "num_examples": 9}], "download_size": 15719851, "dataset_size": 231715}, {"config_name": "en-valid-10k-qa1", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1488751, "num_examples": 1800}, {"name": "test", "num_bytes": 165517, "num_examples": 200}, {"name": "validation", "num_bytes": 165577, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1819845}, {"config_name": "en-valid-10k-qa2", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": 
"string"}]}], "splits": [{"name": "train", "num_bytes": 2746462, "num_examples": 1800}, {"name": "test", "num_bytes": 306631, "num_examples": 200}, {"name": "validation", "num_bytes": 316158, "num_examples": 200}], "download_size": 15719851, "dataset_size": 3369251}, {"config_name": "en-valid-10k-qa3", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 8021847, "num_examples": 1800}, {"name": "test", "num_bytes": 883187, "num_examples": 200}, {"name": "validation", "num_bytes": 899408, "num_examples": 200}], "download_size": 15719851, "dataset_size": 9804442}, {"config_name": "en-valid-10k-qa4", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1849497, "num_examples": 9000}, {"name": "test", "num_bytes": 205434, "num_examples": 1000}, {"name": "validation", "num_bytes": 205648, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 2260579}, {"config_name": "en-valid-10k-qa5", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3234186, "num_examples": 1800}, {"name": "test", "num_bytes": 350457, "num_examples": 200}, {"name": "validation", "num_bytes": 358011, "num_examples": 200}], "download_size": 15719851, "dataset_size": 3942654}, {"config_name": "en-valid-10k-qa6", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1553957, "num_examples": 1800}, {"name": "test", "num_bytes": 172249, "num_examples": 200}, {"name": "validation", "num_bytes": 172799, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1899005}, {"config_name": "en-valid-10k-qa7", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2003820, "num_examples": 1800}, {"name": "test", "num_bytes": 215512, "num_examples": 200}, {"name": "validation", "num_bytes": 224307, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2443639}, {"config_name": "en-valid-10k-qa8", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": 
[{"name": "train", "num_bytes": 1926339, "num_examples": 1800}, {"name": "test", "num_bytes": 216244, "num_examples": 200}, {"name": "validation", "num_bytes": 215581, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2358164}, {"config_name": "en-valid-10k-qa9", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1512917, "num_examples": 1800}, {"name": "test", "num_bytes": 168248, "num_examples": 200}, {"name": "validation", "num_bytes": 168336, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1849501}, {"config_name": "en-valid-10k-qa10", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1536416, "num_examples": 1800}, {"name": "test", "num_bytes": 170672, "num_examples": 200}, {"name": "validation", "num_bytes": 171299, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1878387}, {"config_name": "en-valid-10k-qa11", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1607505, "num_examples": 1800}, {"name": "test", "num_bytes": 178840, "num_examples": 200}, {"name": "validation", "num_bytes": 178714, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1965059}, {"config_name": "en-valid-10k-qa12", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1669198, "num_examples": 1800}, {"name": "test", "num_bytes": 185529, "num_examples": 200}, {"name": "validation", "num_bytes": 185587, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2040314}, {"config_name": "en-valid-10k-qa13", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1712558, "num_examples": 1800}, {"name": "test", "num_bytes": 190484, "num_examples": 200}, {"name": "validation", "num_bytes": 190631, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2093673}, {"config_name": "en-valid-10k-qa14", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", 
"num_bytes": 2091491, "num_examples": 1800}, {"name": "test", "num_bytes": 233204, "num_examples": 200}, {"name": "validation", "num_bytes": 230060, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2554755}, {"config_name": "en-valid-10k-qa15", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1473615, "num_examples": 2250}, {"name": "test", "num_bytes": 163809, "num_examples": 250}, {"name": "validation", "num_bytes": 163823, "num_examples": 250}], "download_size": 15719851, "dataset_size": 1801247}, {"config_name": "en-valid-10k-qa16", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4106444, "num_examples": 9000}, {"name": "test", "num_bytes": 456248, "num_examples": 1000}, {"name": "validation", "num_bytes": 456440, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 5019132}, {"config_name": "en-valid-10k-qa17", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 930465, "num_examples": 1125}, {"name": "test", "num_bytes": 103618, "num_examples": 125}, {"name": "validation", "num_bytes": 103908, "num_examples": 125}], "download_size": 15719851, "dataset_size": 1137991}, {"config_name": "en-valid-10k-qa18", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1477467, "num_examples": 1781}, {"name": "test", "num_bytes": 161266, "num_examples": 199}, {"name": "validation", "num_bytes": 164223, "num_examples": 197}], "download_size": 15719851, "dataset_size": 1802956}, {"config_name": "en-valid-10k-qa19", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3640527, "num_examples": 9000}, {"name": "test", "num_bytes": 404489, "num_examples": 1000}, {"name": "validation", "num_bytes": 404599, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 4449615}, {"config_name": "en-valid-10k-qa20", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 
1041856, "num_examples": 840}, {"name": "test", "num_bytes": 115863, "num_examples": 93}, {"name": "validation", "num_bytes": 115535, "num_examples": 93}], "download_size": 15719851, "dataset_size": 1273254}, {"config_name": "hn-10k-qa1", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1684003, "num_examples": 2000}, {"name": "test", "num_bytes": 168572, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1852575}, {"config_name": "hn-10k-qa2", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2934642, "num_examples": 2000}, {"name": "test", "num_bytes": 288429, "num_examples": 200}], "download_size": 15719851, "dataset_size": 3223071}, {"config_name": "hn-10k-qa3", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 8440008, "num_examples": 1667}, {"name": "test", "num_bytes": 808460, "num_examples": 167}], "download_size": 15719851, "dataset_size": 9248468}, {"config_name": "hn-10k-qa4", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2312075, "num_examples": 10000}, {"name": "test", "num_bytes": 231230, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 2543305}, {"config_name": "hn-10k-qa5", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3301271, "num_examples": 2000}, {"name": "test", "num_bytes": 315396, "num_examples": 200}], "download_size": 15719851, "dataset_size": 3616667}, {"config_name": "hn-10k-qa6", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1703863, "num_examples": 2000}, {"name": "test", "num_bytes": 171360, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1875223}, {"config_name": "hn-10k-qa7", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": 
"supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2091460, "num_examples": 2000}, {"name": "test", "num_bytes": 208080, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2299540}, {"config_name": "hn-10k-qa8", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2178277, "num_examples": 2000}, {"name": "test", "num_bytes": 222232, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2400509}, {"config_name": "hn-10k-qa9", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1874753, "num_examples": 2000}, {"name": "test", "num_bytes": 187341, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2062094}, {"config_name": "hn-10k-qa10", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1830698, "num_examples": 2000}, {"name": "test", "num_bytes": 182932, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2013630}, {"config_name": "hn-10k-qa11", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1798057, "num_examples": 2000}, {"name": "test", "num_bytes": 180461, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1978518}, {"config_name": "hn-10k-qa12", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1879776, "num_examples": 2000}, {"name": "test", "num_bytes": 187954, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2067730}, {"config_name": "hn-10k-qa13", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1915482, "num_examples": 1250}, {"name": "test", "num_bytes": 191747, "num_examples": 125}], "download_size": 15719851, "dataset_size": 2107229}, {"config_name": "hn-10k-qa14", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": 
"question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2392212, "num_examples": 2000}, {"name": "test", "num_bytes": 240436, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2632648}, {"config_name": "hn-10k-qa15", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1702512, "num_examples": 2500}, {"name": "test", "num_bytes": 170259, "num_examples": 250}], "download_size": 15719851, "dataset_size": 1872771}, {"config_name": "hn-10k-qa16", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 5229983, "num_examples": 10000}, {"name": "test", "num_bytes": 523032, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 5753015}, {"config_name": "hn-10k-qa17", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1039729, "num_examples": 1250}, {"name": "test", "num_bytes": 104061, "num_examples": 125}], "download_size": 15719851, "dataset_size": 1143790}, {"config_name": "hn-10k-qa18", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1738458, "num_examples": 1977}, {"name": "test", "num_bytes": 176824, "num_examples": 198}], "download_size": 15719851, "dataset_size": 1915282}, {"config_name": "hn-10k-qa19", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4702044, "num_examples": 10000}, {"name": "test", "num_bytes": 470479, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 5172523}, {"config_name": "hn-10k-qa20", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1147599, "num_examples": 934}, {"name": "test", "num_bytes": 115088, "num_examples": 94}], "download_size": 15719851, "dataset_size": 1262687}, {"config_name": "shuffled-qa1", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, 
{"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 165386, "num_examples": 200}, {"name": "test", "num_bytes": 165517, "num_examples": 200}], "download_size": 15719851, "dataset_size": 330903}, {"config_name": "shuffled-qa2", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 302888, "num_examples": 200}, {"name": "test", "num_bytes": 306631, "num_examples": 200}], "download_size": 15719851, "dataset_size": 609519}, {"config_name": "shuffled-qa3", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 887756, "num_examples": 200}, {"name": "test", "num_bytes": 883187, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1770943}, {"config_name": "shuffled-qa4", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 205510, "num_examples": 1000}, {"name": "test", "num_bytes": 205434, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 410944}, {"config_name": "shuffled-qa5", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 337349, "num_examples": 200}, {"name": "test", "num_bytes": 350457, "num_examples": 200}], "download_size": 15719851, "dataset_size": 687806}, {"config_name": "shuffled-qa6", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 173053, "num_examples": 200}, {"name": "test", "num_bytes": 172249, "num_examples": 200}], "download_size": 15719851, "dataset_size": 345302}, {"config_name": "shuffled-qa7", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 224778, "num_examples": 200}, {"name": "test", "num_bytes": 215512, "num_examples": 200}], "download_size": 15719851, "dataset_size": 440290}, {"config_name": "shuffled-qa8", "features": [{"name": 
"story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 212517, "num_examples": 200}, {"name": "test", "num_bytes": 216244, "num_examples": 200}], "download_size": 15719851, "dataset_size": 428761}, {"config_name": "shuffled-qa9", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 168350, "num_examples": 200}, {"name": "test", "num_bytes": 168248, "num_examples": 200}], "download_size": 15719851, "dataset_size": 336598}, {"config_name": "shuffled-qa10", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 170257, "num_examples": 200}, {"name": "test", "num_bytes": 170672, "num_examples": 200}], "download_size": 15719851, "dataset_size": 340929}, {"config_name": "shuffled-qa11", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 178083, "num_examples": 200}, {"name": "test", "num_bytes": 178313, "num_examples": 200}], "download_size": 15719851, "dataset_size": 356396}, {"config_name": "shuffled-qa12", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 185600, "num_examples": 200}, {"name": "test", "num_bytes": 185529, "num_examples": 200}], "download_size": 15719851, "dataset_size": 371129}, {"config_name": "shuffled-qa13", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 190556, "num_examples": 200}, {"name": "test", "num_bytes": 190484, "num_examples": 200}], "download_size": 15719851, "dataset_size": 381040}, {"config_name": "shuffled-qa14", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 234355, "num_examples": 200}, {"name": "test", "num_bytes": 233204, "num_examples": 200}], "download_size": 15719851, "dataset_size": 
467559}, {"config_name": "shuffled-qa15", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 163728, "num_examples": 250}, {"name": "test", "num_bytes": 163809, "num_examples": 250}], "download_size": 15719851, "dataset_size": 327537}, {"config_name": "shuffled-qa16", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 456374, "num_examples": 1000}, {"name": "test", "num_bytes": 456248, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 912622}, {"config_name": "shuffled-qa17", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 103636, "num_examples": 125}, {"name": "test", "num_bytes": 103618, "num_examples": 125}], "download_size": 15719851, "dataset_size": 207254}, {"config_name": "shuffled-qa18", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 162875, "num_examples": 198}, {"name": "test", "num_bytes": 161266, "num_examples": 199}], "download_size": 15719851, "dataset_size": 324141}, {"config_name": "shuffled-qa19", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 404536, "num_examples": 1000}, {"name": "test", "num_bytes": 404489, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 809025}, {"config_name": "shuffled-qa20", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 115812, "num_examples": 94}, {"name": "test", "num_bytes": 115863, "num_examples": 93}], "download_size": 15719851, "dataset_size": 231675}, {"config_name": "shuffled-10k-qa1", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1654288, "num_examples": 2000}, {"name": "test", "num_bytes": 165517, 
"num_examples": 200}], "download_size": 15719851, "dataset_size": 1819805}, {"config_name": "shuffled-10k-qa2", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3062580, "num_examples": 2000}, {"name": "test", "num_bytes": 306631, "num_examples": 200}], "download_size": 15719851, "dataset_size": 3369211}, {"config_name": "shuffled-10k-qa3", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 8921215, "num_examples": 2000}, {"name": "test", "num_bytes": 883187, "num_examples": 200}], "download_size": 15719851, "dataset_size": 9804402}, {"config_name": "shuffled-10k-qa4", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2055105, "num_examples": 10000}, {"name": "test", "num_bytes": 205434, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 2260539}, {"config_name": "shuffled-10k-qa5", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 3592157, "num_examples": 2000}, {"name": "test", "num_bytes": 350457, "num_examples": 200}], "download_size": 15719851, "dataset_size": 3942614}, {"config_name": "shuffled-10k-qa6", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1726716, "num_examples": 2000}, {"name": "test", "num_bytes": 172249, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1898965}, {"config_name": "shuffled-10k-qa7", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2228087, "num_examples": 2000}, {"name": "test", "num_bytes": 215512, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2443599}, {"config_name": "shuffled-10k-qa8", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": 
[{"name": "train", "num_bytes": 2141880, "num_examples": 2000}, {"name": "test", "num_bytes": 216244, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2358124}, {"config_name": "shuffled-10k-qa9", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1681213, "num_examples": 2000}, {"name": "test", "num_bytes": 168248, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1849461}, {"config_name": "shuffled-10k-qa10", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1707675, "num_examples": 2000}, {"name": "test", "num_bytes": 170672, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1878347}, {"config_name": "shuffled-10k-qa11", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1781176, "num_examples": 2000}, {"name": "test", "num_bytes": 178313, "num_examples": 200}], "download_size": 15719851, "dataset_size": 1959489}, {"config_name": "shuffled-10k-qa12", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1854745, "num_examples": 2000}, {"name": "test", "num_bytes": 185529, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2040274}, {"config_name": "shuffled-10k-qa13", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1903149, "num_examples": 2000}, {"name": "test", "num_bytes": 190484, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2093633}, {"config_name": "shuffled-10k-qa14", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2321511, "num_examples": 2000}, {"name": "test", "num_bytes": 233204, "num_examples": 200}], "download_size": 15719851, "dataset_size": 2554715}, {"config_name": "shuffled-10k-qa15", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": 
"string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1637398, "num_examples": 2500}, {"name": "test", "num_bytes": 163809, "num_examples": 250}], "download_size": 15719851, "dataset_size": 1801207}, {"config_name": "shuffled-10k-qa16", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4562844, "num_examples": 10000}, {"name": "test", "num_bytes": 456248, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 5019092}, {"config_name": "shuffled-10k-qa17", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1034333, "num_examples": 1250}, {"name": "test", "num_bytes": 103618, "num_examples": 125}], "download_size": 15719851, "dataset_size": 1137951}, {"config_name": "shuffled-10k-qa18", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1641650, "num_examples": 1978}, {"name": "test", "num_bytes": 161266, "num_examples": 199}], "download_size": 15719851, "dataset_size": 1802916}, {"config_name": "shuffled-10k-qa19", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 4045086, "num_examples": 10000}, {"name": "test", "num_bytes": 404489, "num_examples": 1000}], "download_size": 15719851, "dataset_size": 4449575}, {"config_name": "shuffled-10k-qa20", "features": [{"name": "story", "sequence": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "context", "1": "question"}}}}, {"name": "text", "dtype": "string"}, {"name": "supporting_ids", "sequence": "string"}, {"name": "answer", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1157351, "num_examples": 933}, {"name": "test", "num_bytes": 115863, "num_examples": 93}], "download_size": 15719851, "dataset_size": 1273214}]}
2023-01-25T14:26:58+00:00
[ "1502.05698", "1511.06931" ]
[ "en" ]
TAGS #task_categories-question-answering #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-3.0 #chained-qa #arxiv-1502.05698 #arxiv-1511.06931 #region-us
Dataset Card for bAbi QA ======================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: The bAbI project * Repository: * Paper: arXiv Paper * Leaderboard: * Point of Contact: ### Dataset Summary The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems. ### Supported Tasks and Leaderboards The dataset supports a set of 20 proxy story-based question answering tasks for various "types" in English and Hindi. The tasks are: The "types" are: * 'en' + the tasks in English, readable by humans. * 'hn' + the tasks in Hindi, readable by humans. * 'shuffled' + the same tasks with shuffled letters, so they are not readable by humans and existing parsers and taggers cannot be used in a straightforward fashion to leverage extra resources -- in this case the learner is forced to rely more on the given training data. This mimics a learner being presented with a language for the first time and having to learn it from scratch. * 'en-10k', 'shuffled-10k' and 'hn-10k' + the same tasks in the three formats, but with 10,000 training examples rather than 1,000. * 'en-valid' and 'en-valid-10k' + the same as 'en' and 'en-10k' except that the train sets have been conveniently split into train and valid portions (90% and 10% split). To get a particular dataset, use 'load\_dataset('babi\_qa', type=f'{type}', task\_no=f'{task\_no}')' where 'type' is one of the types and 'task\_no' is one of the task numbers. For example, 'load\_dataset('babi\_qa', type='en', task\_no='qa1')'. ### Languages Dataset Structure ----------------- ### Data Instances An instance from the 'en-qa1' config's 'train' split: ### Data Fields * 'story': a dictionary feature containing: + 'id': a 'string' feature, which denotes the line number in the example. + 'type': a classification label, with possible values including 'context' and 'question', denoting whether the text is context or a question. + 'text': a 'string' feature containing the text, whether it is a question or context. + 'supporting\_ids': a 'list' of 'string' features containing the line numbers of the lines in the example which support the answer. + 'answer': a 'string' feature containing the answer to the question, or an empty string if the 'type' is not 'question'. ### Data Splits The splits and corresponding sizes are: Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization Code to generate tasks is available on GitHub. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators?
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators Jesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston, at Facebook Research. ### Licensing Information ### Contributions Thanks to @gchhablani for adding this dataset.
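The loading call described in the bAbi QA card above can be exercised with a short Python sketch. This is a minimal, hedged example: it assumes the Hub `babi_qa` loading script accepts the `type`/`task_no` builder kwargs exactly as documented in the card, and recent versions of the `datasets` library may additionally require `trust_remote_code=True` for script-based datasets.

```python
from datasets import load_dataset

# Load task 1 of the English bAbI QA tasks, mirroring the call in the card.
# `type` selects the format ("en", "hn", "shuffled", "en-10k", ...) and
# `task_no` selects one of the twenty tasks ("qa1" ... "qa20").
babi = load_dataset("babi_qa", type="en", task_no="qa1")

# `story` is a sequence feature, so each example holds parallel lists.
story = babi["train"][0]["story"]
for line_id, line_type, text, answer in zip(
    story["id"], story["type"], story["text"], story["answer"]
):
    # `type` is a ClassLabel: 0 = context, 1 = question.
    if line_type == 1:
        print(f"{line_id} Q: {text} -> {answer}")
    else:
        print(f"{line_id} C: {text}")
```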
[ "### Dataset Summary\n\n\nThe (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets,so that researchers can identify (and then rectify) the failings of their systems.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a set of 20 proxy story-based question answering tasks for various \"types\" in English and Hindi. The tasks are:\n\n\n\nThe \"types\" are are:\n\n\n* 'en'\n\n\n\t+ the tasks in English, readable by humans.\n* 'hn'\n\n\n\t+ the tasks in Hindi, readable by humans.\n* 'shuffled'\n\n\n\t+ the same tasks with shuffled letters so they are not readable by humans, and for existing parsers and taggers cannot be used in a straight-forward fashion to leverage extra resources-- in this case the learner is more forced to rely on the given training data. This mimics a learner being first presented a language and having to learn from scratch.\n* 'en-10k', 'shuffled-10k' and 'hn-10k'\n\n\n\t+ the same tasks in the three formats, but with 10,000 training examples, rather than 1000 training examples.\n* 'en-valid' and 'en-valid-10k'\n\n\n\t+ are the same as 'en' and 'en10k' except the train sets have been conveniently split into train and valid portions (90% and 10% split).\n\n\nTo get a particular dataset, use 'load\\_dataset('babi\\_qa',type=f'{type}',task\\_no=f'{task\\_no}')' where 'type' is one of the types, and 'task\\_no' is one of the task numbers. For example, 'load\\_dataset('babi\\_qa', type='en', task\\_no='qa1')'.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn instance from the 'en-qa1' config's 'train' split:", "### Data Fields\n\n\n* 'story': a dictionary feature containing:\n\t+ 'id': a 'string' feature, which denotes the line number in the example.\n\t+ 'type': a classification label, with possible values including 'context', 'question', denoting whether the text is context or a question.\n\t+ 'text': a 'string' feature the text present, whether it is a question or context.\n\t+ 'supporting\\_ids': a 'list' of 'string' features containing the line numbers of the lines in the example which support the answer.\n\t+ 'answer': a 'string' feature containing the answer to the question, or an empty string if the 'type's is not 'question'.", "### Data Splits\n\n\nThe splits and corresponding sizes are:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nCode to generate tasks is available on github", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nJesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston, at Facebook Research.", "### Licensing Information", "### Contributions\n\n\nThanks to @gchhablani for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-3.0 #chained-qa #arxiv-1502.05698 #arxiv-1511.06931 #region-us \n", "### Dataset Summary\n\n\nThe (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets,so that researchers can identify (and then rectify) the failings of their systems.", "### Supported Tasks and Leaderboards\n\n\nThe dataset supports a set of 20 proxy story-based question answering tasks for various \"types\" in English and Hindi. The tasks are:\n\n\n\nThe \"types\" are are:\n\n\n* 'en'\n\n\n\t+ the tasks in English, readable by humans.\n* 'hn'\n\n\n\t+ the tasks in Hindi, readable by humans.\n* 'shuffled'\n\n\n\t+ the same tasks with shuffled letters so they are not readable by humans, and for existing parsers and taggers cannot be used in a straight-forward fashion to leverage extra resources-- in this case the learner is more forced to rely on the given training data. This mimics a learner being first presented a language and having to learn from scratch.\n* 'en-10k', 'shuffled-10k' and 'hn-10k'\n\n\n\t+ the same tasks in the three formats, but with 10,000 training examples, rather than 1000 training examples.\n* 'en-valid' and 'en-valid-10k'\n\n\n\t+ are the same as 'en' and 'en10k' except the train sets have been conveniently split into train and valid portions (90% and 10% split).\n\n\nTo get a particular dataset, use 'load\\_dataset('babi\\_qa',type=f'{type}',task\\_no=f'{task\\_no}')' where 'type' is one of the types, and 'task\\_no' is one of the task numbers. 
For example, 'load\\_dataset('babi\\_qa', type='en', task\\_no='qa1')'.", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn instance from the 'en-qa1' config's 'train' split:", "### Data Fields\n\n\n* 'story': a dictionary feature containing:\n\t+ 'id': a 'string' feature, which denotes the line number in the example.\n\t+ 'type': a classification label, with possible values including 'context', 'question', denoting whether the text is context or a question.\n\t+ 'text': a 'string' feature the text present, whether it is a question or context.\n\t+ 'supporting\\_ids': a 'list' of 'string' features containing the line numbers of the lines in the example which support the answer.\n\t+ 'answer': a 'string' feature containing the answer to the question, or an empty string if the 'type's is not 'question'.", "### Data Splits\n\n\nThe splits and corresponding sizes are:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nCode to generate tasks is available on github", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nJesse Dodge and Andreea Gane and Xiang Zhang and Antoine Bordes and Sumit Chopra and Alexander Miller and Arthur Szlam and Jason Weston, at Facebook Research.", "### Licensing Information", "### Contributions\n\n\nThanks to @gchhablani for adding this dataset." ]
[ 127, 130, 382, 11, 26, 168, 21, 7, 4, 20, 10, 5, 5, 9, 18, 7, 8, 14, 44, 6, 18 ]
[ "passage: TAGS\n#task_categories-question-answering #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #size_categories-1K<n<10K #size_categories-n<1K #source_datasets-original #language-English #license-cc-by-3.0 #chained-qa #arxiv-1502.05698 #arxiv-1511.06931 #region-us \n### Dataset Summary\n\n\nThe (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets,so that researchers can identify (and then rectify) the failings of their systems." ]
f54121560de48f2852f90be299010d1d6dc612ec
# Dataset Card for BANKING77 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/PolyAI-LDN/task-specific-datasets) - **Repository:** [Github](https://github.com/PolyAI-LDN/task-specific-datasets) - **Paper:** [ArXiv](https://arxiv.org/abs/2003.04807) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"> <p><b>Deprecated:</b> Dataset "banking77" is deprecated and will be deleted. Use "<a href="https://huggingface.co/datasets/PolyAI/banking77">PolyAI/banking77</a>" instead.</p> </div> Dataset composed of online banking queries annotated with their corresponding intents. The BANKING77 dataset provides a very fine-grained set of intents in a banking domain. It comprises 13,083 customer service queries labeled with 77 intents. It focuses on fine-grained single-domain intent detection. ### Supported Tasks and Leaderboards Intent classification, intent detection ### Languages English ## Dataset Structure ### Data Instances An example of 'train' looks as follows: ``` { 'label': 11, # integer label corresponding to "card_arrival" intent 'text': 'I am still waiting on my card?' } ``` ### Data Fields - `text`: a string feature. - `label`: One of 77 classification labels (0-76), each corresponding to a unique intent. 
Intent names are mapped to `label` in the following way: | label | intent (category) | |---:|:-------------------------------------------------| | 0 | activate_my_card | | 1 | age_limit | | 2 | apple_pay_or_google_pay | | 3 | atm_support | | 4 | automatic_top_up | | 5 | balance_not_updated_after_bank_transfer | | 6 | balance_not_updated_after_cheque_or_cash_deposit | | 7 | beneficiary_not_allowed | | 8 | cancel_transfer | | 9 | card_about_to_expire | | 10 | card_acceptance | | 11 | card_arrival | | 12 | card_delivery_estimate | | 13 | card_linking | | 14 | card_not_working | | 15 | card_payment_fee_charged | | 16 | card_payment_not_recognised | | 17 | card_payment_wrong_exchange_rate | | 18 | card_swallowed | | 19 | cash_withdrawal_charge | | 20 | cash_withdrawal_not_recognised | | 21 | change_pin | | 22 | compromised_card | | 23 | contactless_not_working | | 24 | country_support | | 25 | declined_card_payment | | 26 | declined_cash_withdrawal | | 27 | declined_transfer | | 28 | direct_debit_payment_not_recognised | | 29 | disposable_card_limits | | 30 | edit_personal_details | | 31 | exchange_charge | | 32 | exchange_rate | | 33 | exchange_via_app | | 34 | extra_charge_on_statement | | 35 | failed_transfer | | 36 | fiat_currency_support | | 37 | get_disposable_virtual_card | | 38 | get_physical_card | | 39 | getting_spare_card | | 40 | getting_virtual_card | | 41 | lost_or_stolen_card | | 42 | lost_or_stolen_phone | | 43 | order_physical_card | | 44 | passcode_forgotten | | 45 | pending_card_payment | | 46 | pending_cash_withdrawal | | 47 | pending_top_up | | 48 | pending_transfer | | 49 | pin_blocked | | 50 | receiving_money | | 51 | Refund_not_showing_up | | 52 | request_refund | | 53 | reverted_card_payment? | | 54 | supported_cards_and_currencies | | 55 | terminate_account | | 56 | top_up_by_bank_transfer_charge | | 57 | top_up_by_card_charge | | 58 | top_up_by_cash_or_cheque | | 59 | top_up_failed | | 60 | top_up_limits | | 61 | top_up_reverted | | 62 | topping_up_by_card | | 63 | transaction_charged_twice | | 64 | transfer_fee_charged | | 65 | transfer_into_account | | 66 | transfer_not_received_by_recipient | | 67 | transfer_timing | | 68 | unable_to_verify_identity | | 69 | verify_my_identity | | 70 | verify_source_of_funds | | 71 | verify_top_up | | 72 | virtual_card_not_working | | 73 | visa_or_mastercard | | 74 | why_verify_identity | | 75 | wrong_amount_of_cash_received | | 76 | wrong_exchange_rate_for_cash_withdrawal | ### Data Splits | Dataset statistics | Train | Test | | --- | --- | --- | | Number of examples | 10 003 | 3 080 | | Average character length | 59.5 | 54.2 | | Number of intents | 77 | 77 | | Number of domains | 1 | 1 | ## Dataset Creation ### Curation Rationale Previous intent detection datasets such as Web Apps, Ask Ubuntu, the Chatbot Corpus or SNIPS are limited to a small number of classes (<10), which oversimplifies the intent detection task and does not emulate the true environment of commercial systems. Although there exist large-scale *multi-domain* datasets ([HWU64](https://github.com/xliuhw/NLU-Evaluation-Data) and [CLINC150](https://github.com/clinc/oos-eval)), the examples per domain may not sufficiently capture the full complexity of each domain as encountered "in the wild". This dataset tries to fill the gap and provides a very fine-grained set of intents in a *single domain*, i.e. **banking**. Its focus on fine-grained single-domain intent detection makes it complementary to the other two multi-domain datasets. 
### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process The dataset does not contain any additional annotations. #### Who are the annotators? [N/A] ### Personal and Sensitive Information [N/A] ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop better intent detection systems. Any comprehensive intent detection evaluation should involve both coarser-grained multi-domain datasets and a fine-grained single-domain dataset such as BANKING77. ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [PolyAI](https://github.com/PolyAI-LDN) ### Licensing Information Creative Commons Attribution 4.0 International ### Citation Information ``` @inproceedings{Casanueva2020, author = {I{\~{n}}igo Casanueva and Tadas Temcinas and Daniela Gerz and Matthew Henderson and Ivan Vulic}, title = {Efficient Intent Detection with Dual Sentence Encoders}, year = {2020}, month = {mar}, note = {Data available at https://github.com/PolyAI-LDN/task-specific-datasets}, url = {https://arxiv.org/abs/2003.04807}, booktitle = {Proceedings of the 2nd Workshop on NLP for ConvAI - ACL 2020} } ``` ### Contributions Thanks to [@dkajtoch](https://github.com/dkajtoch) for adding this dataset.
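Since the card points to `PolyAI/banking77` as the replacement repository, here is a quick usage sketch under that assumption: the `label` column is a `ClassLabel` feature, so the 77 intent names from the table above travel with the dataset and can be decoded without a hand-written mapping.

```python
from datasets import load_dataset

# The deprecation banner above recommends "PolyAI/banking77" over "banking77".
banking = load_dataset("PolyAI/banking77")

# `label` is a ClassLabel feature carrying the 77 intent names.
label_feature = banking["train"].features["label"]
print(label_feature.int2str(11))              # -> "card_arrival"
print(label_feature.str2int("card_arrival"))  # -> 11

# Decode the label of the first training example.
example = banking["train"][0]
print(example["text"], "->", label_feature.int2str(example["label"]))
```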
banking77
[ "task_categories:text-classification", "task_ids:intent-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2003.04807", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification", "multi-class-classification"], "pretty_name": "BANKING77", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "activate_my_card", "1": "age_limit", "2": "apple_pay_or_google_pay", "3": "atm_support", "4": "automatic_top_up", "5": "balance_not_updated_after_bank_transfer", "6": "balance_not_updated_after_cheque_or_cash_deposit", "7": "beneficiary_not_allowed", "8": "cancel_transfer", "9": "card_about_to_expire", "10": "card_acceptance", "11": "card_arrival", "12": "card_delivery_estimate", "13": "card_linking", "14": "card_not_working", "15": "card_payment_fee_charged", "16": "card_payment_not_recognised", "17": "card_payment_wrong_exchange_rate", "18": "card_swallowed", "19": "cash_withdrawal_charge", "20": "cash_withdrawal_not_recognised", "21": "change_pin", "22": "compromised_card", "23": "contactless_not_working", "24": "country_support", "25": "declined_card_payment", "26": "declined_cash_withdrawal", "27": "declined_transfer", "28": "direct_debit_payment_not_recognised", "29": "disposable_card_limits", "30": "edit_personal_details", "31": "exchange_charge", "32": "exchange_rate", "33": "exchange_via_app", "34": "extra_charge_on_statement", "35": "failed_transfer", "36": "fiat_currency_support", "37": "get_disposable_virtual_card", "38": "get_physical_card", "39": "getting_spare_card", "40": "getting_virtual_card", "41": "lost_or_stolen_card", "42": "lost_or_stolen_phone", "43": "order_physical_card", "44": "passcode_forgotten", "45": "pending_card_payment", "46": "pending_cash_withdrawal", "47": "pending_top_up", "48": "pending_transfer", "49": "pin_blocked", "50": "receiving_money", "51": "Refund_not_showing_up", "52": "request_refund", "53": "reverted_card_payment?", "54": "supported_cards_and_currencies", "55": "terminate_account", "56": "top_up_by_bank_transfer_charge", "57": "top_up_by_card_charge", "58": "top_up_by_cash_or_cheque", "59": "top_up_failed", "60": "top_up_limits", "61": "top_up_reverted", "62": "topping_up_by_card", "63": "transaction_charged_twice", "64": "transfer_fee_charged", "65": "transfer_into_account", "66": "transfer_not_received_by_recipient", "67": "transfer_timing", "68": "unable_to_verify_identity", "69": "verify_my_identity", "70": "verify_source_of_funds", "71": "verify_top_up", "72": "virtual_card_not_working", "73": "visa_or_mastercard", "74": "why_verify_identity", "75": "wrong_amount_of_cash_received", "76": "wrong_exchange_rate_for_cash_withdrawal"}}}}], "splits": [{"name": "train", "num_bytes": 715028, "num_examples": 10003}, {"name": "test", "num_bytes": 204010, "num_examples": 3080}], "download_size": 392040, "dataset_size": 919038}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "train-eval-index": [{"config": "default", "task": "text-classification", "task_id": "multi_class_classification", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"text": "text", "label": "target"}, "metrics": [{"type": "accuracy", "name": "Accuracy"}, {"type": "f1", "name": "F1 macro", "args": {"average": "macro"}}, {"type": "f1", "name": "F1 micro", 
"args": {"average": "micro"}}, {"type": "f1", "name": "F1 weighted", "args": {"average": "weighted"}}, {"type": "precision", "name": "Precision macro", "args": {"average": "macro"}}, {"type": "precision", "name": "Precision micro", "args": {"average": "micro"}}, {"type": "precision", "name": "Precision weighted", "args": {"average": "weighted"}}, {"type": "recall", "name": "Recall macro", "args": {"average": "macro"}}, {"type": "recall", "name": "Recall micro", "args": {"average": "micro"}}, {"type": "recall", "name": "Recall weighted", "args": {"average": "weighted"}}]}]}
2024-01-10T08:23:17+00:00
[ "2003.04807" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-intent-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2003.04807 #region-us
Dataset Card for BANKING77 ========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Github * Repository: Github * Paper: ArXiv * Leaderboard: * Point of Contact: ### Dataset Summary **Deprecated:** Dataset "banking77" is deprecated and will be deleted. Use "PolyAI/banking77" instead. Dataset composed of online banking queries annotated with their corresponding intents. The BANKING77 dataset provides a very fine-grained set of intents in a banking domain. It comprises 13,083 customer service queries labeled with 77 intents. It focuses on fine-grained single-domain intent detection. ### Supported Tasks and Leaderboards Intent classification, intent detection ### Languages English Dataset Structure ----------------- ### Data Instances An example of 'train' looks as follows: ### Data Fields * 'text': a string feature. * 'label': One of 77 classification labels (0-76), each corresponding to a unique intent. Intent names are mapped to 'label' in the following way: ### Data Splits Dataset statistics: Number of examples, Train: 10 003, Test: 3 080 Dataset statistics: Average character length, Train: 59.5, Test: 54.2 Dataset statistics: Number of intents, Train: 77, Test: 77 Dataset statistics: Number of domains, Train: 1, Test: 1 Dataset Creation ---------------- ### Curation Rationale Previous intent detection datasets such as Web Apps, Ask Ubuntu, the Chatbot Corpus or SNIPS are limited to a small number of classes (<10), which oversimplifies the intent detection task and does not emulate the true environment of commercial systems. Although there exist large-scale *multi-domain* datasets (HWU64 and CLINC150), the examples per domain may not sufficiently capture the full complexity of each domain as encountered "in the wild". This dataset tries to fill the gap and provides a very fine-grained set of intents in a *single domain*, i.e. banking. Its focus on fine-grained single-domain intent detection makes it complementary to the other two multi-domain datasets. ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process The dataset does not contain any additional annotations. #### Who are the annotators? [N/A] ### Personal and Sensitive Information [N/A] Considerations for Using the Data --------------------------------- ### Social Impact of Dataset The purpose of this dataset is to help develop better intent detection systems. Any comprehensive intent detection evaluation should involve both coarser-grained multi-domain datasets and a fine-grained single-domain dataset such as BANKING77. ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators PolyAI ### Licensing Information Creative Commons Attribution 4.0 International ### Contributions Thanks to @dkajtoch for adding this dataset.
[ "### Dataset Summary\n\n\n\n**Deprecated:** Dataset \"banking77\" is deprecated and will be deleted. Use \"<a href=\"URL instead.</p>\n\n\n\nDataset composed of online banking queries annotated with their corresponding intents.\n\n\nBANKING77 dataset provides a very fine-grained set of intents in a banking domain.\nIt comprises 13,083 customer service queries labeled with 77 intents.\nIt focuses on fine-grained single-domain intent detection.", "### Supported Tasks and Leaderboards\n\n\nIntent classification, intent detection", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows:", "### Data Fields\n\n\n* 'text': a string feature.\n* 'label': One of classification labels (0-76) corresponding to unique intents.\n\n\nIntent names are mapped to 'label' in the following way:", "### Data Splits\n\n\nDataset statistics: Number of examples, Train: 10 003, Test: 3 080\nDataset statistics: Average character length, Train: 59.5, Test: 54.2\nDataset statistics: Number of intents, Train: 77, Test: 77\nDataset statistics: Number of domains, Train: 1, Test: 1\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nPrevious intent detection datasets such as Web Apps, Ask Ubuntu, the Chatbot Corpus or SNIPS are limited to small number of classes (<10), which oversimplifies the intent detection task and does not emulate the true environment of commercial systems. Although there exist large scale *multi-domain* datasets (HWU64 and CLINC150), the examples per each domain may not sufficiently capture the full complexity of each domain as encountered \"in the wild\". This dataset tries to fill the gap and provides a very fine-grained set of intents in a *single-domain* i.e. banking. Its focus on fine-grained single-domain intent detection makes it complementary to the other two multi-domain datasets.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nThe dataset does not contain any additional annotations.", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\n[N/A]\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe purpose of this dataset it to help develop better intent detection systems.\n\n\nAny comprehensive intent detection evaluation should involve both coarser-grained multi-domain datasets and a fine-grained single-domain dataset such as BANKING77.", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nPolyAI", "### Licensing Information\n\n\nCreative Commons Attribution 4.0 International", "### Contributions\n\n\nThanks to @dkajtoch for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-intent-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2003.04807 #region-us \n", "### Dataset Summary\n\n\n\n**Deprecated:** Dataset \"banking77\" is deprecated and will be deleted. Use \"<a href=\"URL instead.</p>\n\n\n\nDataset composed of online banking queries annotated with their corresponding intents.\n\n\nBANKING77 dataset provides a very fine-grained set of intents in a banking domain.\nIt comprises 13,083 customer service queries labeled with 77 intents.\nIt focuses on fine-grained single-domain intent detection.", "### Supported Tasks and Leaderboards\n\n\nIntent classification, intent detection", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of 'train' looks as follows:", "### Data Fields\n\n\n* 'text': a string feature.\n* 'label': One of classification labels (0-76) corresponding to unique intents.\n\n\nIntent names are mapped to 'label' in the following way:", "### Data Splits\n\n\nDataset statistics: Number of examples, Train: 10 003, Test: 3 080\nDataset statistics: Average character length, Train: 59.5, Test: 54.2\nDataset statistics: Number of intents, Train: 77, Test: 77\nDataset statistics: Number of domains, Train: 1, Test: 1\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nPrevious intent detection datasets such as Web Apps, Ask Ubuntu, the Chatbot Corpus or SNIPS are limited to small number of classes (<10), which oversimplifies the intent detection task and does not emulate the true environment of commercial systems. Although there exist large scale *multi-domain* datasets (HWU64 and CLINC150), the examples per each domain may not sufficiently capture the full complexity of each domain as encountered \"in the wild\". This dataset tries to fill the gap and provides a very fine-grained set of intents in a *single-domain* i.e. banking. Its focus on fine-grained single-domain intent detection makes it complementary to the other two multi-domain datasets.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nThe dataset does not contain any additional annotations.", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\n[N/A]\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe purpose of this dataset it to help develop better intent detection systems.\n\n\nAny comprehensive intent detection evaluation should involve both coarser-grained multi-domain datasets and a fine-grained single-domain dataset such as BANKING77.", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nPolyAI", "### Licensing Information\n\n\nCreative Commons Attribution 4.0 International", "### Contributions\n\n\nThanks to @dkajtoch for adding this dataset." ]
[ 113, 119, 18, 12, 18, 50, 85, 185, 4, 10, 10, 5, 17, 14, 23, 63, 8, 14, 8, 11, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-intent-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2003.04807 #region-us \n### Dataset Summary\n\n\n\n**Deprecated:** Dataset \"banking77\" is deprecated and will be deleted. Use \"<a href=\"URL instead.</p>\n\n\n\nDataset composed of online banking queries annotated with their corresponding intents.\n\n\nBANKING77 dataset provides a very fine-grained set of intents in a banking domain.\nIt comprises 13,083 customer service queries labeled with 77 intents.\nIt focuses on fine-grained single-domain intent detection.### Supported Tasks and Leaderboards\n\n\nIntent classification, intent detection### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows:### Data Fields\n\n\n* 'text': a string feature.\n* 'label': One of classification labels (0-76) corresponding to unique intents.\n\n\nIntent names are mapped to 'label' in the following way:### Data Splits\n\n\nDataset statistics: Number of examples, Train: 10 003, Test: 3 080\nDataset statistics: Average character length, Train: 59.5, Test: 54.2\nDataset statistics: Number of intents, Train: 77, Test: 77\nDataset statistics: Number of domains, Train: 1, Test: 1\n\n\nDataset Creation\n----------------" ]
f9dde1200348af9b531e8fd09096bd9f9ddfeb34
# Dataset Card for "bbaw_egyptian" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://edoc.bbaw.de/frontdoor/index/index/docId/2919](https://edoc.bbaw.de/frontdoor/index/index/docId/2919) - **Repository:** [Github](https://phiwi.github.io/all.json) - **Paper:** [Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian Hieroglyph](https://zenodo.org/record/3524924) - **Point of Contact:** [Philipp Wiesenbach](https://www.cl.uni-heidelberg.de/~wiesenbach/index.html) - **Size of downloaded dataset files:** 35.65 MB ### Dataset Summary This dataset comprises parallel sentences of hieroglyphic encodings, transcription and translation as used in the paper [Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian Hieroglyph](https://zenodo.org/record/3524924). The data triples are extracted from the [digital corpus of Egyptian texts](https://edoc.bbaw.de/frontdoor/index/index/docId/2919) compiled by the project "Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache". ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages The dataset consists of parallel triples of - `hieroglyphs`: [Encoding of the hieroglyphs with the [Gardiner's sign list](https://en.wikipedia.org/wiki/Gardiner%27s_sign_list) - `transcription`: Transliteration of the above mentioned hieroglyphs with a [transliteration scheme](https://en.wikipedia.org/wiki/Transliteration_of_Ancient_Egyptian) - `translation`: Translation in mostly German language (with some English mixed in) ## Dataset Structure The dataset is not divided into 'train', 'dev' and 'test' splits as it was not built for competitive purposes and we encourage all scientists to use individual partitioning schemes to suit their needs (due to the low resource setting it might be advisable to use cross validation anyway). The only available split 'all' therefore comprises the full 100,708 translation triples, 35,503 of which possess hieroglyphic encodings (the remaining 65,205 triples have empty `hieroglyph` entries). ### Data Instances An example of a data triple looks the following way: ``` { "transcription": "n rḏi̯(.w) gꜣ =j r dbḥ.t m pr-ḥḏ", "translation": "I was not let to suffer lack in the treasury with respect to what was needed;", "hieroglyphs": "D35 D21 -D37 G1&W11 -V32B A1 D21 D46 -D58 *V28 -F18 *X1 -A2 G17 [? 
*O2 *?]"
}
```

*Important*: Only about a third of the instances actually contain hieroglyphic encodings (the rest is the empty string `""`), as the leftover encodings have not yet been incorporated into the BBAW's project database.

### Data Fields

#### plain_text

- `transcription`: a `string` feature.
- `translation`: a `string` feature.
- `hieroglyphs`: a `string` feature.

### Data Splits

|   name   |   all|
|----------|-----:|
|plain_text|100708|

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

The data source comes from the project "Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache", which is compiling an extensively annotated digital corpus of Egyptian texts. Their [publication](https://edoc.bbaw.de/frontdoor/index/index/docId/2919) comprises an excerpt of the internal database's contents.

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

The corpus has not been preprocessed, as we encourage every scientist to prepare the corpus to their desired needs. This means that all text-critical symbols are still included in the transliteration and translation. This concerns the following annotations:

- `()`: defective
- `[]`: lost
- `{}`: surplus
- `〈〉`: omitted
- `⸢⸣`: damaged
- `⸮?`: unclear
- `{{}}`: erasure
- `(())`: above
- `[[]]`: overstrike
- `〈〈〉〉`: haplography

There exists a similar sign list for the annotation of the hieroglyphic encoding. If you wish to access this list, please get in contact with the author.

#### Who are the annotators?

AV Altägyptisches Wörterbuch (https://www.bbaw.de/forschung/altaegyptisches-woerterbuch), AV Wortschatz der ägyptischen Sprache (https://www.bbaw.de/en/research/vocabulary-of-the-egyptian-language, https://aaew.bbaw.de); Burkhard Backes, Susanne Beck, Anke Blöbaum, Angela Böhme, Marc Brose, Adelheid Burkhardt, Roberto A. Díaz Hernández, Peter Dils, Roland Enmarch, Frank Feder, Heinz Felber, Silke Grallert, Stefan Grunert, Ingelore Hafemann, Anne Herzberg, John M.
Iskander, Ines Köhler, Maxim Kupreyev, Renata Landgrafova, Verena Lepper, Lutz Popko, Alexander Schütze, Simon Schweitzer, Stephan Seidlmayer, Gunnar Sperveslage, Susanne Töpfer, Doris Topmann, Anja Weber

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

CC BY-SA 4.0 Deed Attribution-ShareAlike 4.0 International https://creativecommons.org/licenses/by-sa/4.0/

### Citation Information

Source corpus:
```
@misc{BerlinBrandenburgischeAkademiederWissenschaften2018,
 editor = {{Berlin-Brandenburgische Akademie der Wissenschaften} and {Sächsische Akademie der Wissenschaften zu Leipzig} and Richter, Tonio Sebastian and Hafemann, Ingelore and Hans-Werner Fischer-Elfert and Peter Dils},
 year = {2018},
 title = {Teilauszug der Datenbank des Vorhabens {\dq}Strukturen und Transformationen des Wortschatzes der {\"a}gyptischen Sprache{\dq} vom Januar 2018},
 url = {https://nbn-resolving.org/urn:nbn:de:kobv:b4-opus4-29190},
 keywords = {493;932;{\"A}gyptische Sprache;Korpus},
 abstract = {The research project {\dq}Strukturen und Transformationen des Wortschatzes der {\"a}gyptischen Sprache{\dq} at the Berlin-Brandenburgische Akademie der Wissenschaften compiles an extensively annotated digital corpus of Egyptian texts. This publication comprises an excerpt of the internal database's contents. Its JSON encoded entries require approximately 800 MB of disk space after decompression.},
 location = {Berlin},
 organization = {{Berlin-Brandenburgische Akademie der Wissenschaften} and {Sächsische Akademie der Wissenschaften zu Leipzig}},
 subtitle = {Database snapshot of project {\dq}Strukturen und Transformationen des Wortschatzes der {\"a}gyptischen Sprache{\dq} (excerpt from January 2018)}
}
```

Translation paper:
```
@article{wiesenbach19,
  title = {Multi-Task Modeling of Phonographic Languages: Translating Middle Egyptian Hieroglyphs},
  author = {Wiesenbach, Philipp and Riezler, Stefan},
  journal = {Proceedings of the International Workshop on Spoken Language Translation},
  journal-abbrev = {IWSLT},
  year = {2019},
  url = {https://www.cl.uni-heidelberg.de/statnlpgroup/publications/IWSLT2019_v2.pdf}
}
```

### Contributions

Thanks to [@phiwi](https://github.com/phiwi) for adding this dataset.
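As referenced in the Dataset Structure section above, here is a minimal sketch of loading the corpus and carving out a custom partition. It assumes the Hugging Face `datasets` library and the Hub id `bbaw_egyptian`; the metadata below names the single shipped split `train`, and the 80/10/10 ratio is just an illustrative choice.

```python
from datasets import load_dataset

# Load the single shipped split holding all ~100k translation triples.
corpus = load_dataset("bbaw_egyptian", split="train")

# Keep only the triples that actually carry a hieroglyphic encoding
# (about a third of the corpus, per the card above).
with_glyphs = corpus.filter(lambda ex: ex["hieroglyphs"] != "")

# Carve out a reproducible 80/10/10 train/dev/test partition.
tmp = with_glyphs.train_test_split(test_size=0.2, seed=42)
held_out = tmp["test"].train_test_split(test_size=0.5, seed=42)
splits = {"train": tmp["train"], "dev": held_out["train"], "test": held_out["test"]}

print({name: len(ds) for name, ds in splits.items()})
```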
bbaw_egyptian
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:extended|wikipedia", "language:egy", "language:de", "language:en", "license:cc-by-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["egy", "de", "en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|wikipedia"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "BBAW, Thesaurus Linguae Aegyptiae, Ancient Egyptian (2018)", "dataset_info": {"features": [{"name": "transcription", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "hieroglyphs", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18533905, "num_examples": 100736}], "download_size": 9746860, "dataset_size": 18533905}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-10T08:24:41+00:00
[]
[ "egy", "de", "en" ]
bca982bebdd497ab9078feda251111aac4874318
# Dataset Card for BBC Hindi NLI Dataset

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** [GitHub](https://github.com/midas-research/hindi-nli-data)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/2020.aacl-main.71)
- **Point of Contact:** [GitHub](https://github.com/midas-research/hindi-nli-data)

### Dataset Summary

- Dataset for Natural Language Inference in the Hindi language. The BBC Hindi dataset consists of textual-entailment pairs.
- Each row of the dataset is made up of 4 columns: Premise, Hypothesis, Label and Topic.
- The premise and hypothesis are written in Hindi, while the entailment label is in English.
- The entailment label takes 2 values: entailed and not-entailed.
- The dataset can be used to train models for Natural Language Inference tasks in the Hindi language.

### Supported Tasks and Leaderboards

- Natural Language Inference for Hindi

### Languages

The dataset is in Hindi.

## Dataset Structure

- Data is structured in TSV format.
- Train and test files are in separate files.

### Data Instances

An example of 'train' looks as follows.

```
{'hypothesis': 'यह खबर की सूचना है|', 'label': 'entailed', 'premise': 'गोपनीयता की नीति', 'topic': '1'}
```

### Data Fields

- Each row contains 4 columns: Premise, Hypothesis, Label and Topic.

### Data Splits

- Train : 15553
- Valid : 2581
- Test : 2593

## Dataset Creation

- We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available BBC Hindi news text classification datasets in Hindi and pose them as textual entailment (TE) problems.
- In this recasting process, we build template hypotheses for each class in the label taxonomy.
- Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples (a minimal code sketch follows this card).
- For more information on the recasting process, refer to the paper: https://www.aclweb.org/anthology/2020.aacl-main.71

### Source Data

The source dataset for the recasting process is the BBC Hindi Headlines Dataset (https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1).

#### Initial Data Collection and Normalization

- The BBC Hindi News Classification Dataset contains 4,335 Hindi news headlines tagged across 14 categories: India, Pakistan, news, international, entertainment, sport, science, China, learning english, social, southasia, business, institutional, multimedia.
- We processed this dataset to combine two sets of relevant but low-prevalence classes.
- Namely, we merged the samples from Pakistan, China, international, and southasia into one class called international.
- Likewise, we also merged samples from news, business, social, learning english, and institutional into news.
- Lastly, we also removed the class multimedia because there were very few samples.

#### Who are the source language producers?

Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71

### Annotations

#### Annotation process

The annotation process is described in the Dataset Creation section.

#### Who are the annotators?

Annotation is done automatically.

### Personal and Sensitive Information

No personal or sensitive information is mentioned in the dataset.

## Considerations for Using the Data

Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71

### Discussion of Biases

Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71

### Other Known Limitations

No other known limitations.

## Additional Information

Please refer to this link: https://github.com/midas-research/hindi-nli-data

### Dataset Curators

As stated in the repo (https://github.com/avinsit123/hindi-nli-data):
- This corpus can be used freely for research purposes.
- The paper listed below provides details of the creation and use of the corpus. If you use the corpus, then please cite the paper.
- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.
- If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications.
- Rather than redistributing the corpus, please direct interested parties to this page.
- Please feel free to send us an email:
  - with feedback regarding the corpus.
  - with information on how you have used the corpus.
  - if interested in having us analyze your data for natural language inference.
  - if interested in a collaborative research project.

### Licensing Information

Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).
Please contact the authors for any information on the dataset.

### Citation Information

```
@inproceedings{uppal-etal-2020-two,
    title = "Two-Step Classification using Recasted Data for Low Resource Settings",
    author = "Uppal, Shagun and Gupta, Vivek and Swaminathan, Avinash and Zhang, Haimin and Mahata, Debanjan and Gosangi, Rakesh and Shah, Rajiv Ratn and Stent, Amanda",
    booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
    month = dec,
    year = "2020",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.aacl-main.71",
    pages = "706--719",
    abstract = "An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English.
To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.", } ``` ### Contributions Thanks to [@avinsit123](https://github.com/avinsit123) for adding this dataset.
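As referenced in the Dataset Creation section above, here is a minimal sketch of the recasting idea: pairing one labelled headline with a template hypothesis per topic class, where only the gold class yields an entailed pair. The template sentences are illustrative stand-ins (only the `news` template is taken from the 'train' example above), not the authors' exact wording.

```python
# Illustrative template hypotheses, one per topic class (abridged).
TEMPLATES = {
    "news": "यह खबर की सूचना है|",           # seen in the 'train' example above
    "india": "यह खबर भारत के बारे में है|",   # hypothetical wording
    "sport": "यह खबर खेल के बारे में है|",    # hypothetical wording
    # ... one template per remaining class
}

def recast(headline: str, gold_topic: str) -> list[dict]:
    """Turn one classified headline into one TE pair per topic class."""
    pairs = []
    for topic, hypothesis in TEMPLATES.items():
        pairs.append({
            "premise": headline,
            "hypothesis": hypothesis,
            # Only the gold class yields an entailed pair.
            "label": "entailed" if topic == gold_topic else "not-entailed",
            "topic": topic,
        })
    return pairs

print(recast("गोपनीयता की नीति", "news"))
```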
bbc_hindi_nli
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|bbc__hindi_news_classification", "language:hi", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["hi"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|bbc__hindi_news_classification"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "pretty_name": "BBC Hindi NLI Dataset", "dataset_info": {"config_name": "bbc hindi nli", "features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "not-entailment", "1": "entailment"}}}}, {"name": "topic", "dtype": {"class_label": {"names": {"0": "india", "1": "news", "2": "international", "3": "entertainment", "4": "sport", "5": "science"}}}}], "splits": [{"name": "train", "num_bytes": 2990064, "num_examples": 15552}, {"name": "validation", "num_bytes": 496800, "num_examples": 2580}, {"name": "test", "num_bytes": 494424, "num_examples": 2592}], "download_size": 309124, "dataset_size": 3981288}, "configs": [{"config_name": "bbc hindi nli", "data_files": [{"split": "train", "path": "bbc hindi nli/train-*"}, {"split": "validation", "path": "bbc hindi nli/validation-*"}, {"split": "test", "path": "bbc hindi nli/test-*"}], "default": true}]}
2024-01-10T10:00:44+00:00
[]
[ "hi" ]
dc0640510665bb3de7c88416ede4708cf6481b61
# Dataset Card for bc2gm_corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Github](https://github.com/spyysalo/bc2gm-corpus/) - **Repository:** [Github](https://github.com/spyysalo/bc2gm-corpus/) - **Paper:** [NCBI](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2559986/) - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - `id`: Sentence identifier. - `tokens`: Array of tokens composing a sentence. - `ner_tags`: Array of tags, where `0` indicates no gene mentioned, `1` signals the first token of a gene mention and `2` the subsequent gene tokens. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@mahajandiwakar](https://github.com/mahajandiwakar) for adding this dataset.
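As a quick illustration of the fields above, the sketch below loads the corpus with the Hugging Face `datasets` library and decodes the integer tags back to their names; the `O`/`B-GENE`/`I-GENE` tag set is taken from this card's metadata, and the snippet assumes `datasets` is installed.

```python
from datasets import load_dataset

# Load the default bc2gm_corpus configuration.
ds = load_dataset("bc2gm_corpus", split="train")

# Tag names are stored in the dataset features: ['O', 'B-GENE', 'I-GENE'].
tag_names = ds.features["ner_tags"].feature.names

# Print each token of the first sentence next to its decoded tag.
example = ds[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{tag_names[tag_id]}")
```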
bc2gm_corpus
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Bc2GmCorpus", "dataset_info": {"config_name": "bc2gm_corpus", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-GENE", "2": "I-GENE"}}}}], "splits": [{"name": "train", "num_bytes": 6095123, "num_examples": 12500}, {"name": "validation", "num_bytes": 1215919, "num_examples": 2500}, {"name": "test", "num_bytes": 2454589, "num_examples": 5000}], "download_size": 2154630, "dataset_size": 9765631}, "configs": [{"config_name": "bc2gm_corpus", "data_files": [{"split": "train", "path": "bc2gm_corpus/train-*"}, {"split": "validation", "path": "bc2gm_corpus/validation-*"}, {"split": "test", "path": "bc2gm_corpus/test-*"}], "default": true}]}
2024-01-10T10:03:04+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us
# Dataset Card for bc2gm_corpus ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Github - Repository: Github - Paper: NCBI - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields - 'id': Sentence identifier. - 'tokens': Array of tokens composing a sentence. - 'ner_tags': Array of tags, where '0' indicates no gene mentioned, '1' signals the first token of a gene mention and '2' the subsequent gene tokens. ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @mahajandiwakar for adding this dataset.
[ "# Dataset Card for bc2gm_corpus", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: NCBI\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'id': Sentence identifier. \n- 'tokens': Array of tokens composing a sentence. \n- 'ner_tags': Array of tags, where '0' indicates no gene mentioned, '1' signals the first token of a gene mention and '2' the subsequent gene tokens.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @mahajandiwakar for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n", "# Dataset Card for bc2gm_corpus", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: NCBI\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'id': Sentence identifier. \n- 'tokens': Array of tokens composing a sentence. \n- 'ner_tags': Array of tags, where '0' indicates no gene mentioned, '1' signals the first token of a gene mention and '2' the subsequent gene tokens.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @mahajandiwakar for adding this dataset." ]
[ 96, 13, 120, 32, 6, 10, 4, 6, 6, 77, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 18 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n# Dataset Card for bc2gm_corpus## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: NCBI\n- Leaderboard:\n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields\n\n- 'id': Sentence identifier. \n- 'tokens': Array of tokens composing a sentence. \n- 'ner_tags': Array of tags, where '0' indicates no gene mentioned, '1' signals the first token of a gene mention and '2' the subsequent gene tokens.### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @mahajandiwakar for adding this dataset." ]
27aa014ce09b193e1a6f58112d4a66e0eddb69c5
# Dataset Card for Beans ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Beans Homepage](https://github.com/AI-Lab-Makerere/ibean/) - **Repository:** [AI-Lab-Makerere/ibean](https://github.com/AI-Lab-Makerere/ibean/) - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** N/A ### Dataset Summary Beans leaf dataset with images of diseased and healthy leaves. ### Supported Tasks and Leaderboards - `image-classification`: Based on a leaf image, the goal of this task is to predict the disease type (Angular Leaf Spot and Bean Rust), if any. ### Languages English ## Dataset Structure ### Data Instances A sample from the training set is provided below: ``` { 'image_file_path': '/root/.cache/huggingface/datasets/downloads/extracted/0aaa78294d4bf5114f58547e48d91b7826649919505379a167decb629aa92b0a/train/bean_rust/bean_rust_train.109.jpg', 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x500 at 0x16BAA72A4A8>, 'labels': 1 } ``` ### Data Fields The data instances have the following fields: - `image_file_path`: a `string` filepath to an image. - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `labels`: an `int` classification label. Class Label Mappings: ```json { "angular_leaf_spot": 0, "bean_rust": 1, "healthy": 2 } ``` ### Data Splits | |train|validation|test| |-------------|----:|---------:|---:| |# of examples|1034 |133 |128 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @ONLINE {beansdata, author="Makerere AI Lab", title="Bean disease dataset", month="January", year="2020", url="https://github.com/AI-Lab-Makerere/ibean/" } ``` ### Contributions Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
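To make the decoding note above concrete, here is a minimal sketch (assuming the `datasets` library with PIL support installed) that follows the recommended row-first access pattern and decodes the class label:

```python
from datasets import load_dataset

beans = load_dataset("beans", split="train")

# Index the row first, then the "image" column, so only one file is decoded.
img = beans[0]["image"]
# beans["image"][0] would decode every image in the split first; avoid it.

# Map the integer label back to its class name.
label_names = beans.features["labels"].names
print(label_names[beans[0]["labels"]], img.size)
```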
beans
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:mit", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "Beans", "dataset_info": {"features": [{"name": "image_file_path", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "angular_leaf_spot", "1": "bean_rust", "2": "healthy"}}}}], "splits": [{"name": "train", "num_bytes": 143762054.662, "num_examples": 1034}, {"name": "validation", "num_bytes": 18515527.0, "num_examples": 133}, {"name": "test", "num_bytes": 17720308.0, "num_examples": 128}], "download_size": 179978834, "dataset_size": 179997889.662}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-03T12:06:51+00:00
[]
[ "en" ]
TAGS #task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #region-us
Dataset Card for Beans ====================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Beans Homepage * Repository: AI-Lab-Makerere/ibean * Paper: N/A * Leaderboard: N/A * Point of Contact: N/A ### Dataset Summary Beans leaf dataset with images of diseased and healthy leaves. ### Supported Tasks and Leaderboards * 'image-classification': Based on a leaf image, the goal of this task is to predict the disease type (Angular Leaf Spot and Bean Rust), if any. ### Languages English Dataset Structure ----------------- ### Data Instances A sample from the training set is provided below: ### Data Fields The data instances have the following fields: * 'image\_file\_path': a 'string' filepath to an image. * 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'. * 'labels': an 'int' classification label. Class Label Mappings: ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @nateraw for adding this dataset.
[ "### Dataset Summary\n\n\nBeans leaf dataset with images of diseased and healthy leaves.", "### Supported Tasks and Leaderboards\n\n\n* 'image-classification': Based on a leaf image, the goal of this task is to predict the disease type (Angular Leaf Spot and Bean Rust), if any.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from the training set is provided below:", "### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image\\_file\\_path': a 'string' filepath to an image.\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'labels': an 'int' classification label.\n\n\nClass Label Mappings:", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @nateraw for adding this dataset." ]
[ "TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #region-us \n", "### Dataset Summary\n\n\nBeans leaf dataset with images of diseased and healthy leaves.", "### Supported Tasks and Leaderboards\n\n\n* 'image-classification': Based on a leaf image, the goal of this task is to predict the disease type (Angular Leaf Spot and Bean Rust), if any.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from the training set is provided below:", "### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image\\_file\\_path': a 'string' filepath to an image.\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'labels': an 'int' classification label.\n\n\nClass Label Mappings:", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @nateraw for adding this dataset." ]
[ 92, 22, 50, 12, 16, 182, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 17 ]
[ "passage: TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-mit #region-us \n### Dataset Summary\n\n\nBeans leaf dataset with images of diseased and healthy leaves.### Supported Tasks and Leaderboards\n\n\n* 'image-classification': Based on a leaf image, the goal of this task is to predict the disease type (Angular Leaf Spot and Bean Rust), if any.### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from the training set is provided below:### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image\\_file\\_path': a 'string' filepath to an image.\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'labels': an 'int' classification label.\n\n\nClass Label Mappings:### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information" ]
685fffc4105dda00888f127d586c378bf6fa995e
# Dataset Card for `best2009` ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://aiforthai.in.th/ - **Repository:** https://aiforthai.in.th/corpus.php - **Paper:** - **Leaderboard:** - **Point of Contact:** https://aiforthai.in.th/ ### Dataset Summary `best2009` is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by [NECTEC](https://www.nectec.or.th/) (148,995/2,252 lines of train/test). It was created for [BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10). The test set answers are not provided publicly. ### Supported Tasks and Leaderboards word tokenization ### Languages Thai ## Dataset Structure ### Data Instances ``` {'char': ['?', 'ภ', 'ู', 'ม', 'ิ', 'ป', 'ั', 'ญ', 'ญ', 'า', 'ช', 'า', 'ว', 'บ', '้', 'า', 'น', '\n'], 'char_type': [4, 1, 10, 1, 10, 1, 4, 1, 1, 10, 1, 10, 1, 1, 9, 10, 1, 4], 'fname': 'encyclopedia_00031.txt', 'is_beginning': [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1]} {'char': ['ภ', 'ู', 'ม', 'ิ', 'ป', 'ั', 'ญ', 'ญ', 'า', 'ช', 'า', 'ว', 'บ', '้', 'า', 'น', ' ', 'ห', 'ม', 'า', 'ย', 'ถ', 'ึ', 'ง', ' ', 'ค', 'ว', 'า', 'ม', 'ร', 'ู', '้', 'ข', 'อ', 'ง', 'ช', 'า', 'ว', 'บ', '้', 'า', 'น', ' ', 'ซ', 'ึ', '่', 'ง', 'เ', 'ร', 'ี', 'ย', 'น', 'ร', 'ู', '้', 'ม', 'า', 'จ', 'า', 'ก', 'พ', '่', 'อ', 'แ', 'ม', '่', ' ', 'ป', 'ู', '่', 'ย', '่', 'า', 'ต', 'า', 'ย', 'า', 'ย', ' ', 'ญ', 'า', 'ต', 'ิ', 'พ', 'ี', '่', 'น', '้', 'อ', 'ง', ' ', 'ห', 'ร', 'ื', 'อ', 'ผ', 'ู', '้', 'ม', 'ี', 'ค', 'ว', 'า', 'ม', 'ร', 'ู', '้', 'ใ', 'น', 'ห', 'ม', 'ู', '่', 'บ', '้', 'า', 'น', 'ใ', 'น', 'ท', '้', 'อ', 'ง', 'ถ', 'ิ', '่', 'น', 'ต', '่', 'า', 'ง', 'ๆ', '\n'], 'char_type': [1, 10, 1, 10, 1, 4, 1, 1, 10, 1, 10, 1, 1, 9, 10, 1, 5, 3, 1, 10, 1, 1, 10, 1, 5, 1, 1, 10, 1, 1, 10, 9, 1, 1, 1, 1, 10, 1, 1, 9, 10, 1, 5, 1, 10, 9, 1, 11, 1, 10, 1, 1, 1, 10, 9, 1, 10, 1, 10, 1, 1, 9, 1, 11, 1, 9, 5, 1, 10, 9, 1, 9, 10, 1, 10, 1, 10, 1, 5, 1, 10, 1, 10, 1, 10, 9, 1, 9, 1, 1, 5, 3, 1, 10, 1, 3, 10, 9, 1, 10, 1, 1, 10, 1, 1, 10, 9, 11, 1, 3, 1, 10, 9, 1, 9, 10, 1, 11, 1, 1, 9, 1, 1, 1, 10, 9, 1, 1, 9, 10, 1, 7, 4], 'fname': 'encyclopedia_00031.txt', 'is_beginning': [1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 
0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]} ``` ### Data Fields - `fname`: file name; also marks whether the text comes from articles, news, encyclopedia or novels - `char`: characters - `char_type`: character types as adopted by [deepcut](https://github.com/rkcosmos/deepcut) - `is_beginning`: is beginning of word ### Data Splits | | train | test | |-------------------------|------------|---------| | # lines | 148,995 | 2,252 | | avg words per line | 39.05 | NA | | total words | 5,818,521 | NA | | avg characters per line | 140.39 | 202.79 | | total characters | 20,918,132 | 456,684 | | # lines articles | 16,990 | NA | | # lines encyclopedia | 50,631 | NA | | # lines novels | 50,140 | NA | | # lines news | 31,234 | NA | ## Dataset Creation ### Curation Rationale The dataset was created for [BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10) by [NECTEC](https://www.nectec.or.th/). ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Respective authors of the articles, news, encyclopedia and novels ### Annotations #### Annotation process Detailed annotation guidelines can be found in `BEST_Guideline_Release1.pdf` as part of the uncompressed files. The word tokenization standard used was [InterBEST2009](http://hltshare.fbk.eu/IWSLT2015/InterBEST2009Guidelines-2.pdf). #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information All data are curated from public sources. No personal and sensitive information is expected to be included. ## Considerations for Using the Data ### Social Impact of Dataset - word tokenization dataset from articles, news, encyclopedia and novels ### Discussion of Biases - texts are relatively formal ones from articles, news, encyclopedia and novels. - word tokenization standard used was [InterBEST2009](http://hltshare.fbk.eu/IWSLT2015/InterBEST2009Guidelines-2.pdf). ### Other Known Limitations - some tags unrelated to word tokenization (`<NE>` and `<AB>`) are cleaned out. 
- no word boundary provided for the test set ## Additional Information ### Dataset Curators [NECTEC](https://www.nectec.or.th/) ### Licensing Information CC-BY-NC-SA 3.0 ### Citation Information Dataset: ``` @inproceedings{kosawat2009best, title={BEST 2009: Thai word segmentation software contest}, author={Kosawat, Krit and Boriboon, Monthika and Chootrakool, Patcharika and Chotimongkol, Ananlada and Klaithin, Supon and Kongyoung, Sarawoot and Kriengket, Kanyanut and Phaholphinyo, Sitthaa and Purodakananda, Sumonmas and Thanakulwarapas, Tipraporn and others}, booktitle={2009 Eighth International Symposium on Natural Language Processing}, pages={83--88}, year={2009}, organization={IEEE} } @inproceedings{boriboon2009best, title={Best corpus development and analysis}, author={Boriboon, Monthika and Kriengket, Kanyanut and Chootrakool, Patcharika and Phaholphinyo, Sitthaa and Purodakananda, Sumonmas and Thanakulwarapas, Tipraporn and Kosawat, Krit}, booktitle={2009 International Conference on Asian Language Processing}, pages={322--327}, year={2009}, organization={IEEE} } ``` Character type features: ``` @inproceedings{haruechaiyasak2009tlex, title={TLex: Thai lexeme analyser based on the conditional random fields}, author={Haruechaiyasak, Choochart and Kongyoung, Sarawoot}, booktitle={Proceedings of 8th International Symposium on Natural Language Processing}, year={2009} } ``` ### Contributions Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
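As an illustration of the character-level schema above, the sketch below rebuilds word tokens from the `is_beginning` flags. It assumes the Hugging Face `datasets` library; the reconstruction logic is a straightforward reading of the field description, not an official decoder.

```python
from datasets import load_dataset

best = load_dataset("best2009", split="train")
ex = best[0]

# is_beginning is 1 ("pos") on the first character of each word.
words, current = [], ""
for ch, begin in zip(ex["char"], ex["is_beginning"]):
    if begin == 1 and current:  # a new word starts: flush the buffer
        words.append(current)
        current = ""
    current += ch
if current:
    words.append(current)

print(words[:10])
```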
best2009
[ "task_categories:token-classification", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:th", "license:cc-by-nc-sa-3.0", "word-tokenization", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["th"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": [], "pretty_name": "best2009", "tags": ["word-tokenization"], "dataset_info": {"config_name": "best2009", "features": [{"name": "fname", "dtype": "string"}, {"name": "char", "sequence": "string"}, {"name": "char_type", "sequence": {"class_label": {"names": {"0": "b_e", "1": "c", "2": "d", "3": "n", "4": "o", "5": "p", "6": "q", "7": "s", "8": "s_e", "9": "t", "10": "v", "11": "w"}}}}, {"name": "is_beginning", "sequence": {"class_label": {"names": {"0": "neg", "1": "pos"}}}}], "splits": [{"name": "train", "num_bytes": 483129698, "num_examples": 148995}, {"name": "test", "num_bytes": 10498706, "num_examples": 2252}], "download_size": 28084787, "dataset_size": 493628404}, "configs": [{"config_name": "best2009", "data_files": [{"split": "train", "path": "best2009/train-*"}, {"split": "test", "path": "best2009/test-*"}], "default": true}]}
2024-01-10T10:08:29+00:00
[]
[ "th" ]
TAGS #task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Thai #license-cc-by-nc-sa-3.0 #word-tokenization #region-us
Dataset Card for 'best2009' =========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: * Leaderboard: * Point of Contact: URL ### Dataset Summary 'best2009' is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by NECTEC (148,995/2,252 lines of train/test). It was created for BEST 2010: Word Tokenization Competition. The test set answers are not provided publicly. ### Supported Tasks and Leaderboards word tokenization ### Languages Thai Dataset Structure ----------------- ### Data Instances ### Data Fields * 'fname': file name; also marks whether the text comes from articles, news, encyclopedia or novels * 'char': characters * 'char\_type': character types as adopted by deepcut * 'is\_beginning': is beginning of word ### Data Splits # lines: 148,995 (train), 2,252 (test) avg words per line: 39.05 (train) total words: 5,818,521 (train) avg characters per line: 140.39 (train), 202.79 (test) total characters: 20,918,132 (train), 456,684 (test) # lines articles: 16,990 (train) # lines encyclopedia: 50,631 (train) # lines novels: 50,140 (train) # lines news: 31,234 (train) Dataset Creation ---------------- ### Curation Rationale The dataset was created for BEST 2010: Word Tokenization Competition by NECTEC. ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? Respective authors of the articles, news, encyclopedia and novels ### Annotations #### Annotation process Detailed annotation guidelines can be found in 'BEST\_Guideline\_Release1.pdf' as part of the uncompressed files. The word tokenization standard used was InterBEST2009. #### Who are the annotators? ### Personal and Sensitive Information All data are curated from public sources. No personal and sensitive information is expected to be included. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset * word tokenization dataset from articles, news, encyclopedia and novels ### Discussion of Biases * texts are relatively formal ones from articles, news, encyclopedia and novels. * word tokenization standard used was InterBEST2009. ### Other Known Limitations * some tags unrelated to word tokenization ('NE' and 'AB') are cleaned out. * no word boundary provided for the test set Additional Information ---------------------- ### Dataset Curators NECTEC ### Licensing Information CC-BY-NC-SA 3.0 Dataset: Character type features: ### Contributions Thanks to @cstorm125 for adding this dataset.
[ "### Dataset Summary\n\n\n'best2009' is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by NECTEC (148,995/2,252 lines of train/test). It was created for BEST 2010: Word Tokenization Competition. The test set answers are not provided publicly.", "### Supported Tasks and Leaderboards\n\n\nword tokenization", "### Languages\n\n\nThai\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'fname': file name; also marks whether the text comes from articles, news, encyclopedia or novels\n* 'char': characters\n* 'char\\_type': character types as adopted by deepcut\n* 'is\\_beginning': is beginning of word", "### Data Splits\n\n\n# lines: 148,995 (train), 2,252 (test)\navg words per line: 39.05 (train)\ntotal words: 5,818,521 (train)\navg characters per line: 140.39 (train), 202.79 (test)\ntotal characters: 20,918,132 (train), 456,684 (test)\n# lines articles: 16,990 (train)\n# lines encyclopedia: 50,631 (train)\n# lines novels: 50,140 (train)\n# lines news: 31,234 (train)\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was created for BEST 2010: Word Tokenization Competition by NECTEC.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nRespective authors of the articles, news, encyclopedia and novels", "### Annotations", "#### Annotation process\n\n\nDetailed annotation guidelines can be found in 'BEST\\_Guideline\\_Release1.pdf' as part of the uncompressed files. The word tokenization standard used was InterBEST2009.", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nAll data are curated from public sources. No personal and sensitive information is expected to be included.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\n* word tokenization dataset from articles, news, encyclopedia and novels", "### Discussion of Biases\n\n\n* texts are relatively formal ones from articles, news, encyclopedia and novels.\n* word tokenization standard used was InterBEST2009.", "### Other Known Limitations\n\n\n* some tags unrelated to word tokenization ('NE' and 'AB') are cleaned out.\n* no word boundary provided for the test set\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nNECTEC", "### Licensing Information\n\n\nCC-BY-NC-SA 3.0\n\n\nDataset:\n\n\nCharacter type features:", "### Contributions\n\n\nThanks to @cstorm125 for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Thai #license-cc-by-nc-sa-3.0 #word-tokenization #region-us \n", "### Dataset Summary\n\n\n'best2009' is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by NECTEC (148,995/2,252 lines of train/test). It was created for BEST 2010: Word Tokenization Competition. The test set answers are not provided publicly.", "### Supported Tasks and Leaderboards\n\n\nword tokenization", "### Languages\n\n\nThai\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields\n\n\n* 'fname': file name; also marks whether the text comes from articles, news, encyclopedia or novels\n* 'char': characters\n* 'char\\_type': character types as adopted by deepcut\n* 'is\\_beginning': is beginning of word", "### Data Splits\n\n\n# lines: 148,995 (train), 2,252 (test)\navg words per line: 39.05 (train)\ntotal words: 5,818,521 (train)\navg characters per line: 140.39 (train), 202.79 (test)\ntotal characters: 20,918,132 (train), 456,684 (test)\n# lines articles: 16,990 (train)\n# lines encyclopedia: 50,631 (train)\n# lines novels: 50,140 (train)\n# lines news: 31,234 (train)\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was created for BEST 2010: Word Tokenization Competition by NECTEC.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nRespective authors of the articles, news, encyclopedia and novels", "### Annotations", "#### Annotation process\n\n\nDetailed annotation guidelines can be found in 'BEST\\_Guideline\\_Release1.pdf' as part of the uncompressed files. The word tokenization standard used was InterBEST2009.", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nAll data are curated from public sources. No personal and sensitive information is expected to be included.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\n* word tokenization dataset from articles, news, encyclopedia and novels", "### Discussion of Biases\n\n\n* texts are relatively formal ones from articles, news, encyclopedia and novels.\n* word tokenization standard used was InterBEST2009.", "### Other Known Limitations\n\n\n* some tags unrelated to word tokenization ('NE' and 'AB') are cleaned out.\n* no word boundary provided for the test set\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nNECTEC", "### Licensing Information\n\n\nCC-BY-NC-SA 3.0\n\n\nDataset:\n\n\nCharacter type features:", "### Contributions\n\n\nThanks to @cstorm125 for adding this dataset." ]
[ 91, 72, 14, 12, 6, 66, 119, 26, 4, 10, 25, 5, 52, 9, 38, 24, 40, 47, 9, 23, 17 ]
[ "passage: TAGS\n#task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Thai #license-cc-by-nc-sa-3.0 #word-tokenization #region-us \n### Dataset Summary\n\n\n'best2009' is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by NECTEC (148,995/2,252 lines of train/test). It was created for BEST 2010: Word Tokenization Competition. The test set answers are not provided publicly.### Supported Tasks and Leaderboards\n\n\nword tokenization### Languages\n\n\nThai\n\n\nDataset Structure\n-----------------### Data Instances### Data Fields\n\n\n* 'fname': file name; also marks whether the text comes from articles, news, encyclopedia or novels\n* 'char': characters\n* 'char\\_type': character types as adopted by deepcut\n* 'is\\_beginning': is beginning of word### Data Splits\n\n\n# lines: 148,995 (train), 2,252 (test)\navg words per line: 39.05 (train)\ntotal words: 5,818,521 (train)\navg characters per line: 140.39 (train), 202.79 (test)\ntotal characters: 20,918,132 (train), 456,684 (test)\n# lines articles: 16,990 (train)\n# lines encyclopedia: 50,631 (train)\n# lines novels: 50,140 (train)\n# lines news: 31,234 (train)\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThe dataset was created for BEST 2010: Word Tokenization Competition by NECTEC.### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?\n\n\nRespective authors of the articles, news, encyclopedia and novels### Annotations#### Annotation process\n\n\nDetailed annotation guidelines can be found in 'BEST\\_Guideline\\_Release1.pdf' as part of the uncompressed files. The word tokenization standard used was InterBEST2009." ]
2103df6b09cfdc8b9155ca6ec5d0a5b318cdab6c
# Dataset Card for Bianet ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://opus.nlpl.eu/Bianet/corpus/version/Bianet - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** http://lrec-conf.org/workshops/lrec2018/W19/summaries/6_W19.html - **Paper:** https://arxiv.org/abs/1805.05095 - **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Dataset Summary A parallel news corpus in Turkish, Kurdish and English; Bianet collects 3,214 Turkish articles with their sentence-aligned Kurdish or English translations from the Bianet online newspaper. 3 languages, 3 bitexts total number of files: 6 total number of tokens: 2.25M total number of sentence fragments: 0.14M ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information This corpus is distributed under the CC-BY-SA-4.0 open license. 
### Citation Information ``` @InProceedings{ATAMAN18.6, author = {Duygu Ataman}, title = {Bianet: A Parallel News Corpus in Turkish, Kurdish and English}, booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)}, year = {2018}, month = {may}, date = {7-12}, location = {Miyazaki, Japan}, editor = {Jinhua Du and Mihael Arcan and Qun Liu and Hitoshi Isahara}, publisher = {European Language Resources Association (ELRA)}, address = {Paris, France}, isbn = {979-10-95546-15-3}, language = {english} } ``` ### Contributions Thanks to [@param087](https://github.com/param087) for adding this dataset.
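For convenience, here is a hedged loading example. The configuration names (`en_to_ku`, `en_to_tr`, `ku_to_tr`) are taken from this card's metadata, and the snippet assumes the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the English-Kurdish bitext and print one aligned sentence pair.
bianet = load_dataset("bianet", "en_to_ku", split="train")

pair = bianet[0]["translation"]
print(pair["en"])
print(pair["ku"])
```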
bianet
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:translation", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:ku", "language:tr", "license:cc-by-sa-4.0", "arxiv:1805.05095", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en", "ku", "tr"], "license": "cc-by-sa-4.0", "multilinguality": ["translation"], "size_categories": ["10K<n<100K", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "paperswithcode_id": "bianet", "pretty_name": "Bianet", "config_names": ["en-to-ku", "en-to-tr", "en_to_ku", "en_to_tr", "ku-to-tr", "ku_to_tr"], "dataset_info": [{"config_name": "en_to_ku", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "ku"]}}}], "splits": [{"name": "train", "num_bytes": 1800794, "num_examples": 6402}], "download_size": 1019265, "dataset_size": 1800794}, {"config_name": "en_to_tr", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "tr"]}}}], "splits": [{"name": "train", "num_bytes": 10230995, "num_examples": 34770}], "download_size": 5932117, "dataset_size": 10230995}, {"config_name": "ku_to_tr", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["ku", "tr"]}}}], "splits": [{"name": "train", "num_bytes": 2086538, "num_examples": 7325}], "download_size": 1206133, "dataset_size": 2086538}], "configs": [{"config_name": "en_to_ku", "data_files": [{"split": "train", "path": "en_to_ku/train-*"}]}, {"config_name": "en_to_tr", "data_files": [{"split": "train", "path": "en_to_tr/train-*"}]}, {"config_name": "ku_to_tr", "data_files": [{"split": "train", "path": "ku_to_tr/train-*"}]}]}
2024-02-08T14:23:25+00:00
[ "1805.05095" ]
[ "en", "ku", "tr" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-English #language-Kurdish #language-Turkish #license-cc-by-sa-4.0 #arxiv-1805.05095 #region-us
# Dataset Card for Bianet ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary A parallel news corpus in Turkish, Kurdish and English; Bianet collects 3,214 Turkish articles with their sentence-aligned Kurdish or English translations from the Bianet online newspaper. 3 languages, 3 bitexts total number of files: 6 total number of tokens: 2.25M total number of sentence fragments: 0.14M ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information This corpus is distributed under the CC-BY-SA-4.0 open license. ### Contributions Thanks to @param087 for adding this dataset.
[ "# Dataset Card for Bianet", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nA parallel news corpus in Turkish, Kurdish and English;\nBianet collects 3,214 Turkish articles with their sentence-aligned Kurdish or English translations from the Bianet online newspaper.\n\n3 languages, 3 bitexts\ntotal number of files: 6\ntotal number of tokens: 2.25M\ntotal number of sentence fragments: 0.14M", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThis corpus is distributed under the CC-BY-SA-4.0 open license.", "### Contributions\n\nThanks to @param087 for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-English #language-Kurdish #language-Turkish #license-cc-by-sa-4.0 #arxiv-1805.05095 #region-us \n", "# Dataset Card for Bianet", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nA parallel news corpus in Turkish, Kurdish and English;\nBianet collects 3,214 Turkish articles with their sentence-aligned Kurdish or English translations from the Bianet online newspaper.\n\n3 languages, 3 bitexts\ntotal number of files: 6\ntotal number of tokens: 2.25M\ntotal number of sentence fragments: 0.14M", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThis corpus is distributed under the CC-BY-SA-4.0 open license.", "### Contributions\n\nThanks to @param087 for adding this dataset." ]
[ 108, 10, 120, 30, 82, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 23, 18 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-translation #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-English #language-Kurdish #language-Turkish #license-cc-by-sa-4.0 #arxiv-1805.05095 #region-us \n# Dataset Card for Bianet## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nA parallel news corpus in Turkish, Kurdish and English;\nBianet collects 3,214 Turkish articles with their sentence-aligned Kurdish or English translations from the Bianet online newspaper.\n\n3 languages, 3 bitexts\ntotal number of files: 6\ntotal number of tokens: 2.25M\ntotal number of sentence fragments: 0.14M### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators" ]
0a2c121b0224b552e05f281fc71c55e3180b3d00
# Dataset Card for BiblePara ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://opus.nlpl.eu/bible-uedin.php - **Repository:** None - **Paper:** https://link.springer.com/article/10.1007/s10579-014-9287-y - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary To load a language pair which isn't part of the config, all you need to do is specify the two language codes as a pair. You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/bible-uedin.php E.g. `dataset = load_dataset("bible_para", lang1="fi", lang2="hi")` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances Each instance contains an `id` and a `translation` dictionary that maps the two selected language codes to parallel verse text. ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
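For illustration, a minimal usage sketch of the pair-loading call described above (hedged: the `fi`-`hi` pair mirrors the card's example, and the `id`/`translation` field layout follows this row's feature metadata):

```python
from datasets import load_dataset

# Load the Finnish-Hindi pair from the card's example; any valid pair
# listed on the OPUS bible-uedin homepage can be substituted.
ds = load_dataset("bible_para", lang1="fi", lang2="hi", split="train")

# Each record holds an `id` plus a `translation` dict keyed by language code.
example = ds[0]
print(example["id"])
print(example["translation"]["fi"])
print(example["translation"]["hi"])
```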
bible_para
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:acu", "language:af", "language:agr", "language:ake", "language:am", "language:amu", "language:ar", "language:bg", "language:bsn", "language:cak", "language:ceb", "language:ch", "language:chq", "language:chr", "language:cjp", "language:cni", "language:cop", "language:crp", "language:cs", "language:da", "language:de", "language:dik", "language:dje", "language:djk", "language:dop", "language:ee", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fi", "language:fr", "language:gbi", "language:gd", "language:gu", "language:gv", "language:he", "language:hi", "language:hr", "language:hu", "language:hy", "language:id", "language:is", "language:it", "language:ja", "language:jak", "language:jiv", "language:kab", "language:kbh", "language:kek", "language:kn", "language:ko", "language:la", "language:lt", "language:lv", "language:mam", "language:mi", "language:ml", "language:mr", "language:my", "language:ne", "language:nhg", "language:nl", "language:no", "language:ojb", "language:pck", "language:pes", "language:pl", "language:plt", "language:pot", "language:ppk", "language:pt", "language:quc", "language:quw", "language:ro", "language:rom", "language:ru", "language:shi", "language:sk", "language:sl", "language:sn", "language:so", "language:sq", "language:sr", "language:ss", "language:sv", "language:syr", "language:te", "language:th", "language:tl", "language:tmh", "language:tr", "language:uk", "language:usp", "language:vi", "language:wal", "language:wo", "language:xh", "language:zh", "language:zu", "license:cc0-1.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["acu", "af", "agr", "ake", "am", "amu", "ar", "bg", "bsn", "cak", "ceb", "ch", "chq", "chr", "cjp", "cni", "cop", "crp", "cs", "da", "de", "dik", "dje", "djk", "dop", "ee", "el", "en", "eo", "es", "et", "eu", "fi", "fr", "gbi", "gd", "gu", "gv", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jak", "jiv", "kab", "kbh", "kek", "kn", "ko", "la", "lt", "lv", "mam", "mi", "ml", "mr", "my", "ne", "nhg", "nl", "no", "ojb", "pck", "pes", "pl", "plt", "pot", "ppk", "pt", "quc", "quw", "ro", "rom", "ru", "shi", "sk", "sl", "sn", "so", "sq", "sr", "ss", "sv", "syr", "te", "th", "tl", "tmh", "tr", "uk", "usp", "vi", "wal", "wo", "xh", "zh", "zu"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "BiblePara", "dataset_info": [{"config_name": "de-en", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["de", "en"]}}}], "splits": [{"name": "train", "num_bytes": 17262178, "num_examples": 62195}], "download_size": 5440713, "dataset_size": 17262178}, {"config_name": "en-fr", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "fr"]}}}], "splits": [{"name": "train", "num_bytes": 17536445, "num_examples": 62195}], "download_size": 5470044, "dataset_size": 17536445}, {"config_name": "en-es", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "es"]}}}], "splits": [{"name": "train", "num_bytes": 17105724, "num_examples": 62191}], "download_size": 5418998, "dataset_size": 17105724}, {"config_name": "en-fi", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "fi"]}}}], "splits": [{"name": "train", "num_bytes": 17486055, "num_examples": 62026}], "download_size": 5506407, "dataset_size": 17486055}, {"config_name": "en-no", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "no"]}}}], "splits": [{"name": "train", "num_bytes": 16681323, "num_examples": 62107}], "download_size": 5293164, "dataset_size": 16681323}, {"config_name": "en-hi", "features": [{"name": "id", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "hi"]}}}], "splits": [{"name": "train", "num_bytes": 27849361, "num_examples": 62073}], "download_size": 6224765, "dataset_size": 27849361}]}
2024-01-18T11:01:58+00:00
[]
[ "acu", "af", "agr", "ake", "am", "amu", "ar", "bg", "bsn", "cak", "ceb", "ch", "chq", "chr", "cjp", "cni", "cop", "crp", "cs", "da", "de", "dik", "dje", "djk", "dop", "ee", "el", "en", "eo", "es", "et", "eu", "fi", "fr", "gbi", "gd", "gu", "gv", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jak", "jiv", "kab", "kbh", "kek", "kn", "ko", "la", "lt", "lv", "mam", "mi", "ml", "mr", "my", "ne", "nhg", "nl", "no", "ojb", "pck", "pes", "pl", "plt", "pot", "ppk", "pt", "quc", "quw", "ro", "rom", "ru", "shi", "sk", "sl", "sn", "so", "sq", "sr", "ss", "sv", "syr", "te", "th", "tl", "tmh", "tr", "uk", "usp", "vi", "wal", "wo", "xh", "zh", "zu" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Achuar-Shiwiar #language-Afrikaans #language-Aguaruna #language-Akawaio #language-Amharic #language-Guerrero Amuzgo #language-Arabic #language-Bulgarian #language-Barasana-Eduria #language-Kaqchikel #language-Cebuano #language-Chamorro #language-Quiotepec Chinantec #language-Cherokee #language-Cabécar #language-Asháninka #language-Coptic #language-crp #language-Czech #language-Danish #language-German #language-Southwestern Dinka #language-Zarma #language-Eastern Maroon Creole #language-Lukpa #language-Ewe #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Finnish #language-French #language-Galela #language-Scottish Gaelic #language-Gujarati #language-Manx #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Jakun #language-Shuar #language-Kabyle #language-Camsá #language-Kekchí #language-Kannada #language-Korean #language-Latin #language-Lithuanian #language-Latvian #language-Mam #language-Maori #language-Malayalam #language-Marathi #language-Burmese #language-Nepali (macrolanguage) #language-Tetelcingo Nahuatl #language-Dutch #language-Norwegian #language-Northwestern Ojibwa #language-Paite Chin #language-Iranian Persian #language-Polish #language-Plateau Malagasy #language-Potawatomi #language-Uma #language-Portuguese #language-K'iche' #language-Tena Lowland Quichua #language-Romanian #language-Romany #language-Russian #language-Tachelhit #language-Slovak #language-Slovenian #language-Shona #language-Somali #language-Albanian #language-Serbian #language-Swati #language-Swedish #language-Syriac #language-Telugu #language-Thai #language-Tagalog #language-Tamashek #language-Turkish #language-Ukrainian #language-Uspanteco #language-Vietnamese #language-Wolaytta #language-Wolof #language-Xhosa #language-Chinese #language-Zulu #license-cc0-1.0 #region-us
# Dataset Card for BiblePara ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: None - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary To load a language pair which isn't part of the config, all you need to do is specify the language code as pairs. You can find the valid pairs in Homepage section of Dataset Description: URL E.g. 'dataset = load_dataset("bible_para", lang1="fi", lang2="hi")' ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances Here are some examples of questions and facts: ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @abhishekkrthakur for adding this dataset.
[ "# Dataset Card for BiblePara", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n\n'dataset = load_dataset(\"bible_para\", lang1=\"fi\", lang2=\"hi\")'", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nHere are some examples of questions and facts:", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Achuar-Shiwiar #language-Afrikaans #language-Aguaruna #language-Akawaio #language-Amharic #language-Guerrero Amuzgo #language-Arabic #language-Bulgarian #language-Barasana-Eduria #language-Kaqchikel #language-Cebuano #language-Chamorro #language-Quiotepec Chinantec #language-Cherokee #language-Cabécar #language-Asháninka #language-Coptic #language-crp #language-Czech #language-Danish #language-German #language-Southwestern Dinka #language-Zarma #language-Eastern Maroon Creole #language-Lukpa #language-Ewe #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Finnish #language-French #language-Galela #language-Scottish Gaelic #language-Gujarati #language-Manx #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Jakun #language-Shuar #language-Kabyle #language-Camsá #language-Kekchí #language-Kannada #language-Korean #language-Latin #language-Lithuanian #language-Latvian #language-Mam #language-Maori #language-Malayalam #language-Marathi #language-Burmese #language-Nepali (macrolanguage) #language-Tetelcingo Nahuatl #language-Dutch #language-Norwegian #language-Northwestern Ojibwa #language-Paite Chin #language-Iranian Persian #language-Polish #language-Plateau Malagasy #language-Potawatomi #language-Uma #language-Portuguese #language-K'iche' #language-Tena Lowland Quichua #language-Romanian #language-Romany #language-Russian #language-Tachelhit #language-Slovak #language-Slovenian #language-Shona #language-Somali #language-Albanian #language-Serbian #language-Swati #language-Swedish #language-Syriac #language-Telugu #language-Thai #language-Tagalog #language-Tamashek #language-Turkish #language-Ukrainian #language-Uspanteco #language-Vietnamese #language-Wolaytta #language-Wolof #language-Xhosa #language-Chinese #language-Zulu #license-cc0-1.0 #region-us \n", "# Dataset Card for BiblePara", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\nYou can find the valid pairs in Homepage section of Dataset Description: URL\nE.g.\n\n'dataset = load_dataset(\"bible_para\", lang1=\"fi\", lang2=\"hi\")'", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nHere are some examples of questions and facts:", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation 
process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ 685, 7, 120, 28, 82, 10, 4, 6, 17, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 20 ]
[ "passage: " ]
e807b1d5492aa5f4fac08f3f6c7c85c72887ca12
# Dataset Card for Big Patent ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Big Patent](https://evasharma.github.io/bigpatent/) - **Repository:** - **Paper:** [BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization](https://arxiv.org/abs/1906.03741) - **Leaderboard:** - **Point of Contact:** [Lu Wang](mailto:wangluxy@umich.edu) ### Dataset Summary BIGPATENT consists of 1.3 million records of U.S. patent documents along with human-written abstractive summaries. Each US patent application is filed under a Cooperative Patent Classification (CPC) code. There are nine such classification categories: - a: Human Necessities - b: Performing Operations; Transporting - c: Chemistry; Metallurgy - d: Textiles; Paper - e: Fixed Constructions - f: Mechanical Engineering; Lighting; Heating; Weapons; Blasting - g: Physics - h: Electricity - y: General tagging of new or cross-sectional technology The current defaults are version 2.1.2 (updated to cased raw strings) and 'all' CPC codes: ```python from datasets import load_dataset ds = load_dataset("big_patent") # default is 'all' CPC codes ds = load_dataset("big_patent", "all") # the same as above ds = load_dataset("big_patent", "a") # only 'a' CPC codes ds = load_dataset("big_patent", codes=["a", "b"]) ``` To use version 1.0.0 (lower-cased tokenized words), pass both parameters `codes` and `version`: ```python ds = load_dataset("big_patent", codes="all", version="1.0.0") ds = load_dataset("big_patent", codes="a", version="1.0.0") ds = load_dataset("big_patent", codes=["a", "b"], version="1.0.0") ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances Each instance contains a pair of `description` and `abstract`. `description` is extracted from the Description section of the patent, while `abstract` is extracted from the Abstract section. ``` { 'description': 'FIELD OF THE INVENTION \n [0001] This invention relates to novel calcium phosphate-coated implantable medical devices and processes of making same. The unique calcium-phosphate coated implantable medical devices minimize...', 'abstract': 'This invention relates to novel calcium phosphate-coated implantable medical devices...' } ``` ### Data Fields - `description`: detailed description of the patent. - `abstract`: patent abstract.
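As a hedged illustration of the loading calls and fields above (a sketch, not part of the original card; the 'd' configuration is chosen only because it is the smallest by example count per the splits below):

```python
from datasets import load_dataset

# Load only the 'd' (Textiles; Paper) CPC configuration, current default version.
ds = load_dataset("big_patent", "d", split="validation")

record = ds[0]
# Each record pairs the patent's Description section with its Abstract.
print(record["abstract"])
print(record["description"][:500])
```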
### Data Splits | | train | validation | test | |:----|------------------:|-------------:|-------:| | all | 1207222 | 67068 | 67072 | | a | 174134 | 9674 | 9675 | | b | 161520 | 8973 | 8974 | | c | 101042 | 5613 | 5614 | | d | 10164 | 565 | 565 | | e | 34443 | 1914 | 1914 | | f | 85568 | 4754 | 4754 | | g | 258935 | 14385 | 14386 | | h | 257019 | 14279 | 14279 | | y | 124397 | 6911 | 6911 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @article{DBLP:journals/corr/abs-1906-03741, author = {Eva Sharma and Chen Li and Lu Wang}, title = {{BIGPATENT:} {A} Large-Scale Dataset for Abstractive and Coherent Summarization}, journal = {CoRR}, volume = {abs/1906.03741}, year = {2019}, url = {http://arxiv.org/abs/1906.03741}, eprinttype = {arXiv}, eprint = {1906.03741}, timestamp = {Wed, 26 Jun 2019 07:14:58 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1906-03741.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
big_patent
[ "task_categories:summarization", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0", "patent-summarization", "arxiv:1906.03741", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "10K<n<100K", "1M<n<10M"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "paperswithcode_id": "bigpatent", "pretty_name": "Big Patent", "config_names": ["a", "all", "b", "c", "d", "e", "f", "g", "h", "y"], "tags": ["patent-summarization"], "dataset_info": [{"config_name": "all", "features": [{"name": "description", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 38367048389, "num_examples": 1207222}, {"name": "validation", "num_bytes": 2115827002, "num_examples": 67068}, {"name": "test", "num_bytes": 2129505280, "num_examples": 67072}], "download_size": 10142923776, "dataset_size": 42612380671}, {"config_name": "a", "features": [{"name": "description", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5683460620, "num_examples": 174134}, {"name": "validation", "num_bytes": 313324505, "num_examples": 9674}, {"name": "test", "num_bytes": 316633277, "num_examples": 9675}], "download_size": 10142923776, "dataset_size": 6313418402}, {"config_name": "b", "features": [{"name": "description", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4236070976, "num_examples": 161520}, {"name": "validation", "num_bytes": 234425138, "num_examples": 8973}, {"name": "test", "num_bytes": 231538734, "num_examples": 8974}], "download_size": 10142923776, "dataset_size": 4702034848}, {"config_name": "c", "features": [{"name": "description", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4506249306, "num_examples": 101042}, {"name": "validation", "num_bytes": 244684775, "num_examples": 5613}, {"name": "test", "num_bytes": 252566793, "num_examples": 5614}], "download_size": 10142923776, "dataset_size": 5003500874}, {"config_name": "d", "features": [{"name": "description", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 264717412, "num_examples": 10164}, {"name": "validation", "num_bytes": 14560482, "num_examples": 565}, {"name": "test", "num_bytes": 14403430, "num_examples": 565}], "download_size": 10142923776, "dataset_size": 293681324}, {"config_name": "e", "features": [{"name": "description", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 881101433, "num_examples": 34443}, {"name": "validation", "num_bytes": 48646158, "num_examples": 1914}, {"name": "test", "num_bytes": 48586429, "num_examples": 1914}], "download_size": 10142923776, "dataset_size": 978334020}, {"config_name": "f", "features": [{"name": "description", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2146383473, "num_examples": 85568}, {"name": "validation", "num_bytes": 119632631, "num_examples": 4754}, {"name": "test", "num_bytes": 119596303, "num_examples": 4754}], "download_size": 10142923776, "dataset_size": 2385612407}, {"config_name": "g", "features": [{"name": "description", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8877854206, "num_examples": 258935}, {"name": "validation", "num_bytes": 492581177, "num_examples": 14385}, {"name": "test", "num_bytes": 496324853, 
"num_examples": 14386}], "download_size": 10142923776, "dataset_size": 9866760236}, {"config_name": "h", "features": [{"name": "description", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8075621958, "num_examples": 257019}, {"name": "validation", "num_bytes": 447602356, "num_examples": 14279}, {"name": "test", "num_bytes": 445460513, "num_examples": 14279}], "download_size": 10142923776, "dataset_size": 8968684827}, {"config_name": "y", "features": [{"name": "description", "dtype": "string"}, {"name": "abstract", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3695589005, "num_examples": 124397}, {"name": "validation", "num_bytes": 200369780, "num_examples": 6911}, {"name": "test", "num_bytes": 204394948, "num_examples": 6911}], "download_size": 10142923776, "dataset_size": 4100353733}]}
2024-01-18T11:01:59+00:00
[ "1906.03741" ]
[ "en" ]
TAGS #task_categories-summarization #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #patent-summarization #arxiv-1906.03741 #region-us
Dataset Card for Big Patent =========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: Big Patent * Repository: * Paper: BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization * Leaderboard: * Point of Contact: Lu Wang ### Dataset Summary BIGPATENT, consisting of 1.3 million records of U.S. patent documents along with human written abstractive summaries. Each US patent application is filed under a Cooperative Patent Classification (CPC) code. There are nine such classification categories: * a: Human Necessities * b: Performing Operations; Transporting * c: Chemistry; Metallurgy * d: Textiles; Paper * e: Fixed Constructions * f: Mechanical Engineering; Lightning; Heating; Weapons; Blasting * g: Physics * h: Electricity * y: General tagging of new or cross-sectional technology Current defaults are 2.1.2 version (fix update to cased raw strings) and 'all' CPC codes: To use 1.0.0 version (lower cased tokenized words), pass both parameters 'codes' and 'version': ### Supported Tasks and Leaderboards ### Languages English Dataset Structure ----------------- ### Data Instances Each instance contains a pair of 'description' and 'abstract'. 'description' is extracted from the Description section of the Patent while 'abstract' is extracted from the Abstract section. ### Data Fields * 'description': detailed description of patent. * 'abstract': Patent abastract. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @mattbui for adding this dataset.
[ "### Dataset Summary\n\n\nBIGPATENT, consisting of 1.3 million records of U.S. patent documents along with human written abstractive summaries.\nEach US patent application is filed under a Cooperative Patent Classification (CPC) code.\nThere are nine such classification categories:\n\n\n* a: Human Necessities\n* b: Performing Operations; Transporting\n* c: Chemistry; Metallurgy\n* d: Textiles; Paper\n* e: Fixed Constructions\n* f: Mechanical Engineering; Lightning; Heating; Weapons; Blasting\n* g: Physics\n* h: Electricity\n* y: General tagging of new or cross-sectional technology\n\n\nCurrent defaults are 2.1.2 version (fix update to cased raw strings) and 'all' CPC codes:\n\n\nTo use 1.0.0 version (lower cased tokenized words), pass both parameters 'codes' and 'version':", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach instance contains a pair of 'description' and 'abstract'. 'description' is extracted from the Description section of the Patent while 'abstract' is extracted from the Abstract section.", "### Data Fields\n\n\n* 'description': detailed description of patent.\n* 'abstract': Patent abastract.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @mattbui for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #patent-summarization #arxiv-1906.03741 #region-us \n", "### Dataset Summary\n\n\nBIGPATENT, consisting of 1.3 million records of U.S. patent documents along with human written abstractive summaries.\nEach US patent application is filed under a Cooperative Patent Classification (CPC) code.\nThere are nine such classification categories:\n\n\n* a: Human Necessities\n* b: Performing Operations; Transporting\n* c: Chemistry; Metallurgy\n* d: Textiles; Paper\n* e: Fixed Constructions\n* f: Mechanical Engineering; Lightning; Heating; Weapons; Blasting\n* g: Physics\n* h: Electricity\n* y: General tagging of new or cross-sectional technology\n\n\nCurrent defaults are 2.1.2 version (fix update to cased raw strings) and 'all' CPC codes:\n\n\nTo use 1.0.0 version (lower cased tokenized words), pass both parameters 'codes' and 'version':", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach instance contains a pair of 'description' and 'abstract'. 'description' is extracted from the Description section of the Patent while 'abstract' is extracted from the Abstract section.", "### Data Fields\n\n\n* 'description': detailed description of patent.\n* 'abstract': Patent abastract.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @mattbui for adding this dataset." ]
[ 118, 199, 10, 12, 52, 28, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 18 ]
[ "passage: TAGS\n#task_categories-summarization #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #patent-summarization #arxiv-1906.03741 #region-us \n### Dataset Summary\n\n\nBIGPATENT, consisting of 1.3 million records of U.S. patent documents along with human written abstractive summaries.\nEach US patent application is filed under a Cooperative Patent Classification (CPC) code.\nThere are nine such classification categories:\n\n\n* a: Human Necessities\n* b: Performing Operations; Transporting\n* c: Chemistry; Metallurgy\n* d: Textiles; Paper\n* e: Fixed Constructions\n* f: Mechanical Engineering; Lightning; Heating; Weapons; Blasting\n* g: Physics\n* h: Electricity\n* y: General tagging of new or cross-sectional technology\n\n\nCurrent defaults are 2.1.2 version (fix update to cased raw strings) and 'all' CPC codes:\n\n\nTo use 1.0.0 version (lower cased tokenized words), pass both parameters 'codes' and 'version':### Supported Tasks and Leaderboards### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nEach instance contains a pair of 'description' and 'abstract'. 'description' is extracted from the Description section of the Patent while 'abstract' is extracted from the Abstract section.### Data Fields\n\n\n* 'description': detailed description of patent.\n* 'abstract': Patent abastract.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset" ]
302b413fcd5338a03411461886b0a0ba5b7fce0a
# Dataset Card for "billsum" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/FiscalNote/BillSum](https://github.com/FiscalNote/BillSum) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 67.26 MB - **Size of the generated dataset:** 272.42 MB - **Total amount of disk used:** 339.68 MB ### Dataset Summary BillSum, summarization of US Congressional and California state bills. There are several features: - text: bill text. - summary: summary of the bills. - title: title of the bills. features for us bills. ca bills does not have. - text_len: number of chars in text. - sum_len: number of chars in summary. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 67.26 MB - **Size of the generated dataset:** 272.42 MB - **Total amount of disk used:** 339.68 MB An example of 'train' looks as follows. ``` { "summary": "some summary", "text": "some text.", "title": "An act to amend Section xxx." } ``` ### Data Fields The data fields are the same among all splits. #### default - `text`: a `string` feature. - `summary`: a `string` feature. - `title`: a `string` feature. ### Data Splits | name |train|ca_test|test| |-------|----:|------:|---:| |default|18949| 1237|3269| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization The data consists of three parts: US training bills, US test bills and California test bills. The US bills were collected from the [Govinfo](https://github.com/unitedstates/congress) service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license. 
The California bills, from the 2015-2016 session, are available from the legislature’s [website](https://leginfo.legislature.ca.gov/). #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @misc{kornilova2019billsum, title={BillSum: A Corpus for Automatic Summarization of US Legislation}, author={Anastassia Kornilova and Vlad Eidelman}, year={2019}, eprint={1910.00523}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun) for adding this dataset.
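As a hedged sketch tying the feature descriptions above together (this HF version exposes only `text`, `summary`, and `title` per the metadata below; the `text_len`/`sum_len` quantities are plain character counts, so they are recomputed here):

```python
from datasets import load_dataset

# The California test bills live in their own split.
ds = load_dataset("billsum", split="ca_test")

bill = ds[0]
print(bill["title"])
print(len(bill["text"]))     # text_len as defined above: number of chars in text
print(len(bill["summary"]))  # sum_len as defined above: number of chars in summary
```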
billsum
[ "task_categories:summarization", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc0-1.0", "bills-summarization", "arxiv:1910.00523", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "paperswithcode_id": "billsum", "pretty_name": "BillSum", "tags": ["bills-summarization"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 219596090, "num_examples": 18949}, {"name": "test", "num_bytes": 37866257, "num_examples": 3269}, {"name": "ca_test", "num_bytes": 14945291, "num_examples": 1237}], "download_size": 113729382, "dataset_size": 272407638}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "ca_test", "path": "data/ca_test-*"}]}], "train-eval-index": [{"config": "default", "task": "summarization", "task_id": "summarization", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"text": "text", "summary": "target"}, "metrics": [{"type": "rouge", "name": "Rouge"}]}]}
2024-01-03T12:14:42+00:00
[ "1910.00523" ]
[ "en" ]
TAGS #task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc0-1.0 #bills-summarization #arxiv-1910.00523 #region-us
Dataset Card for "billsum" ========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 67.26 MB * Size of the generated dataset: 272.42 MB * Total amount of disk used: 339.68 MB ### Dataset Summary BillSum, summarization of US Congressional and California state bills. There are several features: * text: bill text. * summary: summary of the bills. * title: title of the bills. features for us bills. ca bills does not have. * text\_len: number of chars in text. * sum\_len: number of chars in summary. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 67.26 MB * Size of the generated dataset: 272.42 MB * Total amount of disk used: 339.68 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'text': a 'string' feature. * 'summary': a 'string' feature. * 'title': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The data consists of three parts: US training bills, US test bills and California test bills. The US bills were collected from the Govinfo service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license. The California, bills from the 2015-2016 session are available from the legislature’s website. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @thomwolf, @jplu, @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nBillSum, summarization of US Congressional and California state bills.\n\n\nThere are several features:\n\n\n* text: bill text.\n* summary: summary of the bills.\n* title: title of the bills.\nfeatures for us bills. ca bills does not have.\n* text\\_len: number of chars in text.\n* sum\\_len: number of chars in summary.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 67.26 MB\n* Size of the generated dataset: 272.42 MB\n* Total amount of disk used: 339.68 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'text': a 'string' feature.\n* 'summary': a 'string' feature.\n* 'title': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data consists of three parts: US training bills, US test bills and California test bills. The US bills were collected from the Govinfo service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license. The California, bills from the 2015-2016 session are available from the legislature’s website.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @jplu, @lewtun for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc0-1.0 #bills-summarization #arxiv-1910.00523 #region-us \n", "### Dataset Summary\n\n\nBillSum, summarization of US Congressional and California state bills.\n\n\nThere are several features:\n\n\n* text: bill text.\n* summary: summary of the bills.\n* title: title of the bills.\nfeatures for us bills. ca bills does not have.\n* text\\_len: number of chars in text.\n* sum\\_len: number of chars in summary.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 67.26 MB\n* Size of the generated dataset: 272.42 MB\n* Total amount of disk used: 339.68 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'text': a 'string' feature.\n* 'summary': a 'string' feature.\n* 'title': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data consists of three parts: US training bills, US test bills and California test bills. The US bills were collected from the Govinfo service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license. The California, bills from the 2015-2016 session are available from the legislature’s website.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @jplu, @lewtun for adding this dataset." ]
[ 90, 91, 10, 11, 6, 53, 17, 37, 11, 7, 4, 84, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 25 ]
[ "passage: TAGS\n#task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc0-1.0 #bills-summarization #arxiv-1910.00523 #region-us \n### Dataset Summary\n\n\nBillSum, summarization of US Congressional and California state bills.\n\n\nThere are several features:\n\n\n* text: bill text.\n* summary: summary of the bills.\n* title: title of the bills.\nfeatures for us bills. ca bills does not have.\n* text\\_len: number of chars in text.\n* sum\\_len: number of chars in summary.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 67.26 MB\n* Size of the generated dataset: 272.42 MB\n* Total amount of disk used: 339.68 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'text': a 'string' feature.\n* 'summary': a 'string' feature.\n* 'title': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization\n\n\nThe data consists of three parts: US training bills, US test bills and California test bills. The US bills were collected from the Govinfo service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license. The California, bills from the 2015-2016 session are available from the legislature’s website.#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators" ]
77f70b572508c4571927c95e3b9bec64e4275d39
# Dataset Card for BingCoronavirusQuerySet ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** None - **Repository:** https://github.com/microsoft/BingCoronavirusQuerySet - **Paper:** None - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary Please note that you can specify the start and end dates of the data. You can find the available start and end dates here: https://github.com/microsoft/BingCoronavirusQuerySet/tree/master/data/2020 example: ``` load_dataset("bing_coronavirus_query_set", queries_by="state", start_date="2020-09-01", end_date="2020-09-30") ``` You can also load the data by country by using `queries_by="country"`. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations [More Information Needed] #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
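A minimal usage sketch of the date-ranged loading described above (hedged: the column names follow this row's feature metadata, and the popularity threshold is an arbitrary illustrative value):

```python
from datasets import load_dataset

# Queries aggregated by country for September 2020, as in the example above.
ds = load_dataset(
    "bing_coronavirus_query_set",
    queries_by="country",
    start_date="2020-09-01",
    end_date="2020-09-30",
    split="train",
)

# Keep only widely issued queries; 90 is an arbitrary cutoff for illustration.
popular = ds.filter(lambda row: row["PopularityScore"] >= 90)
print(popular[0]["Query"], popular[0]["Country"], popular[0]["IsImplicitIntent"])
```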
bing_coronavirus_query_set
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification"], "pretty_name": "BingCoronavirusQuerySet", "dataset_info": {"config_name": "country_2020-09-01_2020-09-30", "features": [{"name": "id", "dtype": "int32"}, {"name": "Date", "dtype": "string"}, {"name": "Query", "dtype": "string"}, {"name": "IsImplicitIntent", "dtype": "string"}, {"name": "Country", "dtype": "string"}, {"name": "PopularityScore", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 22052194, "num_examples": 317856}], "download_size": 6768102, "dataset_size": 22052194}, "configs": [{"config_name": "country_2020-09-01_2020-09-30", "data_files": [{"split": "train", "path": "country_2020-09-01_2020-09-30/train-*"}], "default": true}]}
2024-01-10T10:17:05+00:00
[]
[ "en" ]
5bb4def0bfa1570a933f18af2d8c13c22c2e2b94
# Dataset Card for "biomrc" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [http://nlp.cs.aueb.gr/](http://nlp.cs.aueb.gr/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 1.29 GB - **Size of the generated dataset:** 5.81 GB - **Total amount of disk used:** 7.09 GB ### Dataset Summary We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### biomrc_large_A - **Size of downloaded dataset files:** 408.08 MB - **Size of the generated dataset:** 1.92 GB - **Total amount of disk used:** 2.33 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "abstract": "\"OBJECTIVES: @entity9 is a @entity10 that may result from greater occipital nerve entrapment. 
Entrapped peripheral nerves typica...", "answer": "@entity9 :: (MESH:D009437,Disease) :: ['unilateral occipital neuralgia']\n", "entities_list": ["@entity1 :: ('9606', 'Species') :: ['patients']", "@entity10 :: ('MESH:D006261', 'Disease') :: ['headache', 'Headache']", "@entity9 :: ('MESH:D009437', 'Disease') :: ['Occipital neuralgia', 'unilateral occipital neuralgia']"], "title": "Sonographic evaluation of the greater occipital nerve in XXXX .\n" } ``` #### biomrc_large_B - **Size of downloaded dataset files:** 343.06 MB - **Size of the generated dataset:** 1.54 GB - **Total amount of disk used:** 1.88 GB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "abstract": "\"BACKGROUND: Adults with physical disabilities are less likely than others to receive @entity2 screening. It is not known, howev...", "answer": "@entity2", "entities_list": ["@entity2", "@entity1", "@entity0", "@entity3"], "title": "Does a standard measure of self-reported physical disability correlate with clinician perception of impairment related to XXXX screening?\n" } ``` #### biomrc_small_A - **Size of downloaded dataset files:** 68.88 MB - **Size of the generated dataset:** 236.32 MB - **Total amount of disk used:** 305.20 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "abstract": "\"PURPOSE: @entity120 ( @entity120 ) is a life-limiting @entity102 that presents as an elevated blood pressure in the pulmonary a...", "answer": "@entity148 :: (MESH:D001008,Disease) :: ['anxiety']\n", "entities_list": "[\"@entity1 :: ('9606', 'Species') :: ['patients']\", \"@entity308 :: ('MESH:D003866', 'Disease') :: ['depression']\", \"@entity146 :...", "title": "A predictive model of the effects of @entity308 , XXXX , stress, 6-minute-walk distance, and social support on health-related quality of life in an adult pulmonary hypertension population.\n" } ``` #### biomrc_small_B - **Size of downloaded dataset files:** 57.70 MB - **Size of the generated dataset:** 189.62 MB - **Total amount of disk used:** 247.33 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "abstract": "\"Single-agent activity for @entity12 reflected by response rates of 10%-30% has been reported in @entity0 with @entity3 ( @entit...", "answer": "@entity10", "entities_list": ["@entity0", "@entity6", "@entity2", "@entity5", "@entity12", "@entity11", "@entity1", "@entity7", "@entity9", "@entity10", "@entity3", "@entity4", "@entity8"], "title": "No synergistic activity of @entity7 and XXXX in the treatment of @entity3 .\n" } ``` #### biomrc_tiny_A - **Size of downloaded dataset files:** 0.02 MB - **Size of the generated dataset:** 0.07 MB - **Total amount of disk used:** 0.09 MB An example of 'test' looks as follows. ``` This example was too long and was cropped: { "abstract": "\"OBJECTIVE: Decompressive craniectomy (DC) requires later cranioplasty (CP) in survivors. 
However, if additional ventriculoperit...", "answer": "@entity260 :: (MESH:D011183,Disease) :: ['Postoperative Complications']\n", "entities_list": ["@entity1 :: ('9606', 'Species') :: ['Patients', 'patients', 'Patient']", "@entity260 :: ('MESH:D011183', 'Disease') :: ['VPS regarding postoperative complications']", "@entity1276 :: ('MESH:D006849', 'Disease') :: ['hydrocephalus']"], "title": "Cranioplasty and Ventriculoperitoneal Shunt Placement after Decompressive Craniectomy: Staged Surgery Is Associated with Fewer XXXX .\n" } ``` ### Data Fields The data fields are the same among all splits. #### biomrc_large_A - `abstract`: a `string` feature. - `title`: a `string` feature. - `entities_list`: a `list` of `string` features. - `answer`: a `string` feature. #### biomrc_large_B - `abstract`: a `string` feature. - `title`: a `string` feature. - `entities_list`: a `list` of `string` features. - `answer`: a `string` feature. #### biomrc_small_A - `abstract`: a `string` feature. - `title`: a `string` feature. - `entities_list`: a `list` of `string` features. - `answer`: a `string` feature. #### biomrc_small_B - `abstract`: a `string` feature. - `title`: a `string` feature. - `entities_list`: a `list` of `string` features. - `answer`: a `string` feature. #### biomrc_tiny_A - `abstract`: a `string` feature. - `title`: a `string` feature. - `entities_list`: a `list` of `string` features. - `answer`: a `string` feature. ### Data Splits #### biomrc_large_A | |train |validation|test | |--------------|-----:|---------:|----:| |biomrc_large_A|700000| 50000|62707| #### biomrc_large_B | |train |validation|test | |--------------|-----:|---------:|----:| |biomrc_large_B|700000| 50000|62707| #### biomrc_small_A | |train|validation|test| |--------------|----:|---------:|---:| |biomrc_small_A|87500| 6250|6250| #### biomrc_small_B | |train|validation|test| |--------------|----:|---------:|---:| |biomrc_small_B|87500| 6250|6250| #### biomrc_tiny_A | |test| |-------------|---:| |biomrc_tiny_A| 30| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{pappas-etal-2020-biomrc, title = "{B}io{MRC}: A Dataset for Biomedical Machine Reading Comprehension", author = "Pappas, Dimitris and Stavropoulos, Petros and Androutsopoulos, Ion and McDonald, Ryan", booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.bionlp-1.15", pages = "140--149", abstract = "We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different sizes, also releasing our code, and providing a leaderboard.", } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@PetrosStav](https://github.com/PetrosStav), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
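A small loading sketch to make the cloze setup concrete (an addition, not part of the original card). The configuration name follows the `biomrc_{large,small,tiny}_{A,B}` scheme above; as the examples show, setting A keeps entity metadata (IDs and surface forms) while setting B reduces candidates to bare `@entityN` placeholders. Recent `datasets` releases may require `trust_remote_code=True` for script-based datasets.

```python
from datasets import load_dataset

# Load the small "setting A" variant (87,500 / 6,250 / 6,250 examples).
biomrc = load_dataset("biomrc", "biomrc_small_A")

example = biomrc["validation"][0]
print(example["title"])              # the "question": a title with the answer masked as XXXX
print(example["entities_list"][:3])  # candidate entities found in the abstract
print(example["answer"])             # the gold entity that fills XXXX
```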
biomrc
[ "language:en", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "paperswithcode_id": "biomrc", "pretty_name": "BIOMRC", "dataset_info": [{"config_name": "plain_text", "features": [{"name": "abstract", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "entities_list", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1653301820, "num_examples": 700000}, {"name": "validation", "num_bytes": 119697683, "num_examples": 50000}, {"name": "test", "num_bytes": 147832373, "num_examples": 62707}], "download_size": 408080356, "dataset_size": 1920831876}, {"config_name": "biomrc_large_A", "features": [{"name": "abstract", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "entities_list", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1653301820, "num_examples": 700000}, {"name": "validation", "num_bytes": 119697683, "num_examples": 50000}, {"name": "test", "num_bytes": 147832373, "num_examples": 62707}], "download_size": 408080356, "dataset_size": 1920831876}, {"config_name": "biomrc_large_B", "features": [{"name": "abstract", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "entities_list", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1325877001, "num_examples": 700000}, {"name": "validation", "num_bytes": 96414040, "num_examples": 50000}, {"name": "test", "num_bytes": 118708586, "num_examples": 62707}], "download_size": 343061539, "dataset_size": 1540999627}, {"config_name": "biomrc_small_A", "features": [{"name": "abstract", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "entities_list", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 206553549, "num_examples": 87500}, {"name": "validation", "num_bytes": 14957163, "num_examples": 6250}, {"name": "test", "num_bytes": 14807799, "num_examples": 6250}], "download_size": 68879274, "dataset_size": 236318511}, {"config_name": "biomrc_small_B", "features": [{"name": "abstract", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "entities_list", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 165662937, "num_examples": 87500}, {"name": "validation", "num_bytes": 12047304, "num_examples": 6250}, {"name": "test", "num_bytes": 11911172, "num_examples": 6250}], "download_size": 57706889, "dataset_size": 189621413}, {"config_name": "biomrc_tiny_A", "features": [{"name": "abstract", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "entities_list", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 70914, "num_examples": 30}], "download_size": 22519, "dataset_size": 70914}, {"config_name": "biomrc_tiny_B", "features": [{"name": "abstract", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "entities_list", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 59925, "num_examples": 30}], "download_size": 19685, "dataset_size": 59925}]}
2024-01-18T11:02:01+00:00
[]
[ "en" ]
2394a2eda8dae34a30f68f0770775fd5c2e863bd
# Dataset Card for BIOSSES

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html
- **Repository:** https://github.com/gizemsogancioglu/biosses
- **Paper:** [BIOSSES: a semantic sentence similarity estimation system for the biomedical domain](https://academic.oup.com/bioinformatics/article/33/14/i49/3953954)
- **Point of Contact:** [Gizem Soğancıoğlu](gizemsogancioglu@gmail.com) and [Arzucan Özgür](gizemsogancioglu@gmail.com)

### Dataset Summary

BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/) containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.

The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:

- very strong: 0.80–1.00
- strong: 0.60–0.79
- moderate: 0.40–0.59
- weak: 0.20–0.39
- very weak: 0.00–0.19

### Supported Tasks and Leaderboards

Biomedical Semantic Similarity Scoring.

### Languages

English.

## Dataset Structure

### Data Instances

Each instance consists of two sentences (i.e. sentence 1 and 2) and their similarity score (the mean of the scores assigned by the five human annotators).

```
{'sentence 1': 'Here, looking for agents that could specifically kill KRAS mutant cells, they found that knockdown of GATA2 was synthetically lethal with KRAS mutation',
 'sentence 2': 'Not surprisingly, GATA2 knockdown in KRAS mutant cells resulted in a striking reduction of active GTP-bound RHO proteins, including the downstream ROCK kinase',
 'score': 2.2}
```

### Data Fields

- `sentence 1`: string
- `sentence 2`: string
- `score`: float ranging from 0 (no relation) to 4 (equivalent)

(Per the dataset metadata, the Hub version stores the sentence columns as `sentence1` and `sentence2`.)

### Data Splits

No train/validation/test splits are provided; the dataset ships as a single split of 100 sentence pairs.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

The [TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset](https://tac.nist.gov/2014/BiomedSumm/).

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.

The table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. It is observed that there is strong association among the scores of the annotators. The lowest correlation is 0.902, which can be considered as an upper bound for an algorithmic measure evaluated on this dataset.

|            |Correlation r |
|-----------:|-------------:|
| Annotator A|         0.952|
| Annotator B|         0.958|
| Annotator C|         0.917|
| Annotator D|         0.902|
| Annotator E|         0.941|

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

- Gizem Soğancıoğlu, gizemsogancioglu@gmail.com
- Hakime Öztürk, hakime.ozturk@boun.edu.tr
- Arzucan Özgür, gizemsogancioglu@gmail.com

Bogazici University, Istanbul, Turkey

### Licensing Information

BIOSSES is made available under the terms of [the GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html).

### Citation Information

@article{souganciouglu2017biosses,
  title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain},
  author={So{\u{g}}anc{\i}o{\u{g}}lu, Gizem and {\"O}zt{\"u}rk, Hakime and {\"O}zg{\"u}r, Arzucan},
  journal={Bioinformatics},
  volume={33},
  number={14},
  pages={i49--i58},
  year={2017},
  publisher={Oxford University Press}
}

### Contributions

Thanks to [@bwang482](https://github.com/bwang482) for adding this dataset.
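The evaluation protocol above is easy to reproduce; the sketch below (an addition, not part of the original card) scores stand-in predictions against the gold scores, assuming the Hub column names `sentence1`, `sentence2`, and `score` from the dataset metadata.

```python
import numpy as np
from datasets import load_dataset

biosses = load_dataset("biosses", split="train")  # single split of 100 pairs
gold = np.asarray(biosses["score"], dtype=float)

# Stand-in predictions; a real system would score each sentence pair here,
# e.g. from biosses["sentence1"] and biosses["sentence2"].
rng = np.random.default_rng(0)
predicted = rng.uniform(0.0, 4.0, size=len(gold))

# The paper's metric: Pearson correlation between gold and predicted scores.
r = np.corrcoef(gold, predicted)[0, 1]
print(f"Pearson r = {r:.3f}")  # r >= 0.80 would count as "very strong" per Evans (1996)
```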
biosses
[ "task_categories:text-classification", "task_ids:text-scoring", "task_ids:semantic-similarity-scoring", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:gpl-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["text-scoring", "semantic-similarity-scoring"], "paperswithcode_id": "biosses", "pretty_name": "BIOSSES", "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "score", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 32775, "num_examples": 100}], "download_size": 23090, "dataset_size": 32775}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-10T10:20:02+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-gpl-3.0 #region-us
Dataset Card for BIOSSES ======================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: BIOSSES: a semantic sentence similarity estimation system for the biomedical domain * Point of Contact: Gizem Soğancıoğlu and Arzucan Özgür ### Dataset Summary BIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article. The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows: * very strong: 0.80–1.00 * strong: 0.60–0.79 * moderate: 0.40–0.59 * weak: 0.20–0.39 * very weak: 0.00–0.19 ### Supported Tasks and Leaderboards Biomedical Semantic Similarity Scoring. ### Languages English. Dataset Structure ----------------- ### Data Instances For each instance, there are two sentences (i.e. sentence 1 and 2), and its corresponding similarity score (the mean of the scores assigned by the five human annotators). ### Data Fields * 'sentence 1': string * 'sentence 2': string * 'score': float ranging from 0 (no relation) to 4 (equivalent) ### Data Splits No data splits provided. Dataset Creation ---------------- ### Curation Rationale ### Source Data The TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset. #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process The sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees. The table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. It is observed that there is strong association among the scores of the annotators. The lowest correlations are 0.902, which can be considered as an upper bound for an algorithmic measure evaluated on this dataset. #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators * Gizem Soğancıoğlu, gizemsogancioglu@URL * Hakime Öztürk, URL@URL * Arzucan Özgür, gizemsogancioglu@URL Bogazici University, Istanbul, Turkey ### Licensing Information BIOSSES is made available under the terms of The GNU Common Public License v.3.0. @article{souganciouglu2017biosses, title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain}, author={So{\u{g}}anc{\i}o{\u{g}}lu, Gizem and {"O}zt{"u}rk, Hakime and {"O}zg{"u}r, Arzucan}, journal={Bioinformatics}, volume={33}, number={14}, pages={i49--i58}, year={2017}, publisher={Oxford University Press} } ### Contributions Thanks to @bwang482 for adding this dataset.
[ "### Dataset Summary\n\n\nBIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.\n\n\nThe sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:\n\n\n* very strong: 0.80–1.00\n* strong: 0.60–0.79\n* moderate: 0.40–0.59\n* weak: 0.20–0.39\n* very weak: 0.00–0.19", "### Supported Tasks and Leaderboards\n\n\nBiomedical Semantic Similarity Scoring.", "### Languages\n\n\nEnglish.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nFor each instance, there are two sentences (i.e. sentence 1 and 2), and its corresponding similarity score (the mean of the scores assigned by the five human annotators).", "### Data Fields\n\n\n* 'sentence 1': string\n* 'sentence 2': string\n* 'score': float ranging from 0 (no relation) to 4 (equivalent)", "### Data Splits\n\n\nNo data splits provided.\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nThe TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nThe sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.\n\n\nThe table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. It is observed that there is strong association among the scores of the annotators. 
The lowest correlations are 0.902, which can be considered as an upper bound for an algorithmic measure evaluated on this dataset.", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\n* Gizem Soğancıoğlu, gizemsogancioglu@URL\n* Hakime Öztürk, URL@URL\n* Arzucan Özgür, gizemsogancioglu@URL\nBogazici University, Istanbul, Turkey", "### Licensing Information\n\n\nBIOSSES is made available under the terms of The GNU Common Public License v.3.0.\n\n\n@article{souganciouglu2017biosses,\ntitle={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain},\nauthor={So{\\u{g}}anc{\\i}o{\\u{g}}lu, Gizem and {\"O}zt{\"u}rk, Hakime and {\"O}zg{\"u}r, Arzucan},\njournal={Bioinformatics},\nvolume={33},\nnumber={14},\npages={i49--i58},\nyear={2017},\npublisher={Oxford University Press}\n}", "### Contributions\n\n\nThanks to @bwang482 for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-gpl-3.0 #region-us \n", "### Dataset Summary\n\n\nBIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.\n\n\nThe sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:\n\n\n* very strong: 0.80–1.00\n* strong: 0.60–0.79\n* moderate: 0.40–0.59\n* weak: 0.20–0.39\n* very weak: 0.00–0.19", "### Supported Tasks and Leaderboards\n\n\nBiomedical Semantic Similarity Scoring.", "### Languages\n\n\nEnglish.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nFor each instance, there are two sentences (i.e. sentence 1 and 2), and its corresponding similarity score (the mean of the scores assigned by the five human annotators).", "### Data Fields\n\n\n* 'sentence 1': string\n* 'sentence 2': string\n* 'score': float ranging from 0 (no relation) to 4 (equivalent)", "### Data Splits\n\n\nNo data splits provided.\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nThe TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\n\nThe sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.\n\n\nThe table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. It is observed that there is strong association among the scores of the annotators. 
The lowest correlations are 0.902, which can be considered as an upper bound for an algorithmic measure evaluated on this dataset.", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\n* Gizem Soğancıoğlu, gizemsogancioglu@URL\n* Hakime Öztürk, URL@URL\n* Arzucan Özgür, gizemsogancioglu@URL\nBogazici University, Istanbul, Turkey", "### Licensing Information\n\n\nBIOSSES is made available under the terms of The GNU Common Public License v.3.0.\n\n\n@article{souganciouglu2017biosses,\ntitle={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain},\nauthor={So{\\u{g}}anc{\\i}o{\\u{g}}lu, Gizem and {\"O}zt{\"u}rk, Hakime and {\"O}zg{\"u}r, Arzucan},\njournal={Bioinformatics},\nvolume={33},\nnumber={14},\npages={i49--i58},\nyear={2017},\npublisher={Oxford University Press}\n}", "### Contributions\n\n\nThanks to @bwang482 for adding this dataset." ]
[ 101, 263, 20, 13, 49, 44, 17, 7, 24, 10, 10, 5, 191, 9, 18, 7, 8, 14, 52, 163, 18 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-text-scoring #task_ids-semantic-similarity-scoring #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-gpl-3.0 #region-us \n### Dataset Summary\n\n\nBIOSSES is a benchmark dataset for biomedical sentence similarity estimation. The dataset comprises 100 sentence pairs, in which each sentence was selected from the TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset containing articles from the biomedical domain. The sentence pairs in BIOSSES were selected from citing sentences, i.e. sentences that have a citation to a reference article.\n\n\nThe sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). In the original paper the mean of the scores assigned by the five human annotators was taken as the gold standard. The Pearson correlation between the gold standard scores and the scores estimated by the models was used as the evaluation metric. The strength of correlation can be assessed by the general guideline proposed by Evans (1996) as follows:\n\n\n* very strong: 0.80–1.00\n* strong: 0.60–0.79\n* moderate: 0.40–0.59\n* weak: 0.20–0.39\n* very weak: 0.00–0.19### Supported Tasks and Leaderboards\n\n\nBiomedical Semantic Similarity Scoring.### Languages\n\n\nEnglish.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nFor each instance, there are two sentences (i.e. sentence 1 and 2), and its corresponding similarity score (the mean of the scores assigned by the five human annotators).### Data Fields\n\n\n* 'sentence 1': string\n* 'sentence 2': string\n* 'score': float ranging from 0 (no relation) to 4 (equivalent)### Data Splits\n\n\nNo data splits provided.\n\n\nDataset Creation\n----------------", "passage: ### Curation Rationale### Source Data\n\n\nThe TAC (Text Analysis Conference) Biomedical Summarization Track Training Dataset.#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process\n\n\nThe sentence pairs were evaluated by five different human experts that judged their similarity and gave scores ranging from 0 (no relation) to 4 (equivalent). The score range was described based on the guidelines of SemEval 2012 Task 6 on STS (Agirre et al., 2012). Besides the annotation instructions, example sentences from the biomedical literature were provided to the annotators for each of the similarity degrees.\n\n\nThe table below shows the Pearson correlation of the scores of each annotator with respect to the average scores of the remaining four annotators. It is observed that there is strong association among the scores of the annotators. The lowest correlations are 0.902, which can be considered as an upper bound for an algorithmic measure evaluated on this dataset.#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\n* Gizem Soğancıoğlu, gizemsogancioglu@URL\n* Hakime Öztürk, URL@URL\n* Arzucan Özgür, gizemsogancioglu@URL\nBogazici University, Istanbul, Turkey" ]
37e47767e1b557bdc6ffbb37115d7784f8694f22
# Dataset Card for British Library Books ## Table of Contents - [Dataset Card for British Library Books](#dataset-card-for-British-Library-Books) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Language model training](#language-model-training) - [Supervised tasks](#supervised-tasks) - [Languages](#languages) - [Language change](#language-change) - [Optical Character Recognition](#optical-character-recognition) - [OCR word confidence](#ocr-word-confidence) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Date normalization](#date-normalization) - [Metadata included](#metadata-included) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Colonialism](#colonialism) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.bl.uk/collection-guides/digitised-printed-books - **Repository:** https://doi.org/10.21250/db14 - **Paper:** - **Leaderboard:** - **Point of Contact:** labs@bl.uk ### Dataset Summary This dataset consists of books digitised by the British Library in partnership with Microsoft. The dataset includes ~25 million pages of out of copyright texts. The majority of the texts were published in the 18th and 19th Century, but the collection also consists of a smaller number of books from earlier periods. Items within this collection cover a wide range of subject areas, including geography, philosophy, history, poetry and literature and are published in various languages. While the books are predominately from the 18th and 19th Centuries, there are fewer books from earlier periods. The number of pages in the corpus by decade: | | page count | | ---- | ---------- | | 1510 | 94 | | 1520 | 32 | | 1540 | 184 | | 1550 | 16 | | 1580 | 276 | | 1590 | 540 | | 1600 | 1117 | | 1610 | 1132 | | 1620 | 1856 | | 1630 | 9274 | | 1640 | 4232 | | 1650 | 2944 | | 1660 | 5858 | | 1670 | 11415 | | 1680 | 8348 | | 1690 | 13756 | | 1700 | 10160 | | 1710 | 9556 | | 1720 | 10314 | | 1730 | 13282 | | 1740 | 10778 | | 1750 | 12001 | | 1760 | 21415 | | 1770 | 28490 | | 1780 | 32676 | | 1790 | 50014 | | 1800 | 307806 | | 1810 | 478008 | | 1820 | 589419 | | 1830 | 681212 | | 1840 | 1113473 | | 1850 | 1726108 | | 1860 | 1725407 | | 1870 | 2069089 | | 1880 | 2585159 | | 1890 | 3365031 | [More Information Needed] ### Supported Tasks and Leaderboards This collection has been previously used across various digital history and humanities projects since being published. 
The dataset consists of text and a range of metadata associated with this text. This metadata includes:

- date of publication
- place of publication
- country of publication
- language
- OCR quality
- physical description of the original physical item

#### Language model training

As a relatively large dataset, `blbooks` provides a source dataset for training language models. The presence of this metadata also offers interesting opportunities to use this dataset as a source for training language models based on:

- specific time-periods
- specific languages
- certain OCR quality thresholds

The above is not an exhaustive list but offers some suggestions of how the dataset can be used to explore topics such as the impact of OCR quality on language models, the ‘transferability’ of language models across time or the impact of training multilingual language models on historical languages.

#### Supervised tasks

Whilst this dataset does not have annotations for a specific NLP task, such as Named Entity Recognition, it does include a wide variety of metadata. This metadata has the potential to be used for training and/or evaluating a variety of supervised tasks predicting this metadata.

### Languages

This dataset consists of books published in several languages. The breakdown of the languages included (at the page level) is:

| Language | Pages |
| --------------------- | -------- |
| English | 10039463 |
| French | 1442929 |
| German | 1172793 |
| Spanish | 286778 |
| Italian | 214255 |
| Dutch | 204759 |
| Russian | 193347 |
| Danish | 93366 |
| Hungarian | 88094 |
| Swedish | 76225 |
| Polish | 58901 |
| Greek, Modern (1453-) | 26104 |
| Latin | 25611 |
| Portuguese | 25410 |
| Czech | 20160 |
| Bulgarian | 7891 |
| Finnish | 5677 |
| Irish | 2743 |
| Serbian | 1975 |
| Romanian | 1544 |
| Norwegian Nynorsk | 1398 |
| Croatian | 1306 |
| Norwegian | 1227 |
| Icelandic | 902 |
| Slovak | 840 |
| Lithuanian | 714 |
| Welsh | 580 |
| Slovenian | 545 |
| Indonesian | 418 |
| Cornish | 223 |

This breakdown was derived from the first language in the associated metadata field. Some books include multiple languages. Some of the language codes for this data were also derived using computational methods. Therefore, the language fields in the dataset should be treated with some caution (discussed in more detail below).

#### Language change

The publication dates of books in the data cover a broad period of time (1500-1900). For languages in the dataset with broad temporal coverage, significant [language change](https://en.wikipedia.org/wiki/Language_change) might be found. The ability to study this change by taking reasonably large samples of languages covering different time periods is one of the opportunities offered by this dataset. The fact that the text in this dataset was produced via Optical Character Recognition (OCR) causes some challenges for this type of research (see below).

#### Optical Character Recognition

The digitised books in this collection were transformed into machine-readable text using Optical Character Recognition (OCR) software. The text produced via OCR software will usually include some errors. These errors include mistakes at the character level (for example, an `i` mistaken for an `l`), at the word level, or across significant passages of text.

The books in this dataset can pose some additional challenges for OCR software.
OCR errors can stem from:

- the quality of the original printing: printing technology was a developing technology during the time period covered by this corpus; some of the original book text will include misprints, blurred or faded ink that is hard to read
- damage to the page: some of the books will have become damaged over time; this can obscure all or parts of the text on a page
- poor quality scans: scanning books can be challenging; for example, if the book has tight bindings, it can be hard to capture text that has fallen into the [gutter](https://www.abaa.org/glossary/entry/gutter) of the book.
- the language used in the books may differ from the languages OCR software is predominantly trained to recognise.

##### OCR word confidence

Many OCR engines produce some form of confidence score alongside the predicted text. These confidence scores are usually at the character or word level. The word confidence score was given for each word in the original ALTO XML versions of the text in this dataset. The OCR confidence scores should be treated with some scepticism. For historical text or in a lower resource language, for example, a low confidence score may be more likely for words not included in a modern dictionary but may be accurate transcriptions of the original text. With that said, the confidence scores do give some sense of the OCR quality.

An example of text with a high mean word confidence score (over 90%):

```
8 direction to the Conduit, round which is a wide open space, and a good broad pavement called the Parade. It commands a pleasant peep of the slopes and terrace throughout its entire length. The street continuing from the Conduit, in the same general direction, was known anciently as Lodborne Lane, and is now named South Street. From the Conduit two other streets, at right angles to these, are Long Street, leading Eastwards, and Half-Moon Street (formerly Lodborne), leading to Westbury, Trendle Street, and the Horsecastles Road.
```

An example of text with a score below 40%:

```
Hannover. Schrift und Druck von Fr. CultniTmn,', "LeMNs'utluirui.", 'ü 8u«llim» M^äalßwi 01de!lop 1<M.', 'p^dnalmw vom Xr^u/e, lpiti>»**Kmm lie« !»^2!M kleine lii!<! (,«>* ttünee!<»e^ v»n tndzt Lievclum, 1872,
```

The quality of OCR - as measured by mean OCR confidence for a page - across the dataset correlates with other features. A groupby of publication decade and mean word confidence:

| decade | mean_wc_ocr |
| ------ | ----------- |
| 1510 | 0.499151 |
| 1520 | 0.544818 |
| 1540 | 0.511589 |
| 1550 | 0.4505 |
| 1580 | 0.321858 |
| 1590 | 0.461282 |
| 1600 | 0.467318 |
| 1610 | 0.495895 |
| 1620 | 0.501257 |
| 1630 | 0.49766 |
| 1640 | 0.512095 |
| 1650 | 0.528534 |
| 1660 | 0.521014 |
| 1670 | 0.592575 |
| 1680 | 0.583901 |
| 1690 | 0.567202 |
| 1700 | 0.575175 |
| 1710 | 0.61436 |
| 1720 | 0.627725 |
| 1730 | 0.658534 |
| 1740 | 0.64214 |
| 1750 | 0.657357 |
| 1760 | 0.6389 |
| 1770 | 0.651883 |
| 1780 | 0.632326 |
| 1790 | 0.664279 |
| 1800 | 0.682338 |
| 1810 | 0.708915 |
| 1820 | 0.730015 |
| 1830 | 0.730973 |
| 1840 | 0.713886 |
| 1850 | 0.697106 |
| 1860 | 0.696701 |
| 1870 | 0.717233 |
| 1880 | 0.733331 |
| 1890 | 0.762364 |

As might be expected, the earlier periods have lower mean word confidence scores. Again, all of this should be treated with some scepticism, especially as the size of the data grows over time.
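The decade figures above can be recomputed from the dataset itself. A minimal sketch, assuming the `1500_1899` configuration and the `date` and `mean_wc_ocr` fields described in this card (the dtype of `date` differs between configurations, so both integer years and timestamps are handled):

```python
import pandas as pd
from datasets import load_dataset

ds = load_dataset("TheBritishLibrary/blbooks", "1500_1899", split="train")
df = ds.to_pandas()

# `date` is an integer year in some configurations and a timestamp in others
date = df["date"]
years = date.dt.year if pd.api.types.is_datetime64_any_dtype(date) else date
df["decade"] = (years // 10) * 10  # e.g. 1853 -> 1850

# Mean word confidence per publication decade, as in the table above
print(df.groupby("decade")["mean_wc_ocr"].mean())
```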
As with time, the mean word confidence of the OCR software varies across languages:

| Language_1 | mean_wc_ocr |
| --------------------- | ----------- |
| Croatian | 0.755565 |
| Welsh | 0.7528 |
| Norwegian Nynorsk | 0.751648 |
| Slovenian | 0.746007 |
| French | 0.740772 |
| Finnish | 0.738032 |
| Czech | 0.737849 |
| Hungarian | 0.736076 |
| Dutch | 0.734977 |
| Cornish | 0.733682 |
| Danish | 0.733106 |
| English | 0.733037 |
| Irish | 0.732658 |
| Portuguese | 0.727746 |
| Spanish | 0.725111 |
| Icelandic | 0.724427 |
| Italian | 0.715839 |
| Swedish | 0.715633 |
| Polish | 0.715133 |
| Lithuanian | 0.700003 |
| Bulgarian | 0.694657 |
| Romanian | 0.692957 |
| Latin | 0.689022 |
| Russian | 0.685847 |
| Serbian | 0.674329 |
| Slovak | 0.66739 |
| Greek, Modern (1453-) | 0.632195 |
| German | 0.631457 |
| Indonesian | 0.6155 |
| Norwegian | 0.597987 |

Again, these numbers should be treated sceptically since some languages appear very infrequently. For example, the above table suggests the mean word confidence for Welsh is relatively high. However, there isn’t much Welsh in the dataset. Therefore, it is unlikely that this data will be particularly useful for training (historic) Welsh language models.

[More Information Needed]

## Dataset Structure

The dataset has a number of configurations relating to the different dates of publication in the underlying data:

- `1500_1899`: this configuration covers all years
- `1800_1899`: this configuration covers the years between 1800 and 1899
- `1700_1799`: this configuration covers the years between 1700 and 1799
- `1510_1699`: this configuration covers the years between 1510 and 1699

### Configuration option

All of the configurations have an optional keyword argument `skip_empty_pages` which is set to `True` by default. The underlying dataset includes some pages where there is no text. This could either be because the underlying book page didn't have any text or the OCR software failed to detect this text.

For many uses of this dataset it doesn't make sense to include empty pages so these are skipped by default. However, for some uses you may prefer to retain a representation of the data that includes these empty pages. Passing `skip_empty_pages=False` when loading the dataset will enable this option (a loading sketch is shown below under Considerations for Using the Data).

### Data Instances

An example data instance:

```python
{'Country of publication 1': 'England',
 'Language_1': 'English',
 'Language_2': None,
 'Language_3': None,
 'Language_4': None,
 'Physical description': None,
 'Publisher': None,
 'all Countries of publication': 'England',
 'all names': 'Settle, Elkanah [person]',
 'date': 1689,
 'empty_pg': True,
 'mean_wc_ocr': 0.0,
 'multi_language': False,
 'name': 'Settle, Elkanah',
 'pg': 1,
 'place': 'London',
 'raw_date': '1689',
 'record_id': '001876770',
 'std_wc_ocr': 0.0,
 'text': None,
 'title': 'The Female Prelate: being the history and the life and death of Pope Joan. A tragedy [in five acts and in verse] . Written by a Person of Quality [i.e. Elkanah Settle]'}
```

Each instance in the dataset represents a single page from an original digitised book.

### Data Fields

Included in this dataset are:

| Field | Data Type | Description |
| ---------------------------- | --------- | ------------------------------------------------------------------------------------------------------------- |
| record_id | string | British Library ID for the item |
| date | int | parsed/normalised year for the item. i.e. 1850 |
| raw_date | string | the original raw date for an item i.e. 1850- |
| title | string | title of the book |
| place | string | Place of publication, i.e. London |
| empty_pg | bool | whether page contains text |
| text | string | OCR generated text for a page |
| pg | int | page in original book the instance refers to |
| mean_wc_ocr | float | mean word confidence values for the page |
| std_wc_ocr | float | standard deviation of the word confidence values for the page |
| name | string | name associated with the item (usually author) |
| all names | string | all names associated with a publication |
| Publisher | string | publisher of the book |
| Country of publication 1 | string | first country associated with publication |
| all Countries of publication | string | all countries associated with a publication |
| Physical description | string | physical description of the item (size). This requires some normalisation before use and isn’t always present |
| Language_1 | string | first language associated with the book, this is usually present |
| Language_2 | string | |
| Language_3 | string | |
| Language_4 | string | |
| multi_language | bool | |

Some of these fields are not populated a large proportion of the time. You can get some sense of this from this [Pandas Profiling](https://github.com/pandas-profiling/pandas-profiling) [report](https://davanstrien.github.io/BL-datasets-pandas-profile-reports/pandas_profile_report_MS_digitised_books_2021-01-09.html)

The majority of these fields relate to metadata about the books. Most of these fields were created by staff working for the British Library. The notable exception is the “Languages” fields that have sometimes been determined using computational methods. This work is reported in more detail in [Automated Language Identification of Bibliographic Resources](https://doi.org/10.1080/01639374.2019.1700201).

It is important to note that metadata is neither perfect nor static. The metadata associated with these books was generated based on an export from the British Library catalogue in 2021.

[More Information Needed]

### Data Splits

This dataset contains a single split `train`.

## Dataset Creation

**Note** this section is a work in progress.

### Curation Rationale

The books in this collection were digitised as part of a project partnership between the British Library and Microsoft. [Mass digitisation](https://en.wikipedia.org/wiki/Category:Mass_digitization), i.e. projects intending to quickly digitise large volumes of materials, shapes the selection of materials to include in several ways. Some considerations which are often involved in the decision of whether to include items for digitisation include (but are not limited to):

- copyright status
- preservation needs
- the size of an item: very large and very small items are often hard to digitise quickly

These criteria can have knock-on effects on the makeup of a collection. For example, systematically excluding large books may result in some types of book content not being digitised. Large volumes are likely to be correlated with content to at least some extent, so excluding them from digitisation will mean that material is underrepresented. Similarly, copyright status is often (but not only) determined by publication date. This can often lead to a rapid fall in the number of items in a collection after a certain cut-off date.

All of the above is largely to make clear that this collection was not curated to create a representative sample of the British Library’s holdings. Some material will be over-represented, and others under-represented.
Similarly, the collection should not be considered a representative sample of what was published across the period covered by the dataset (nor do the relative proportions of the data for each time period represent a proportional sample of publications from that period). Finally, and this probably does not need stating, the language included in the text should not be considered representative of either written or spoken language(s) from that time period.

[More Information Needed]

### Source Data

The source data (physical items) includes a variety of resources (predominantly monographs) held by the [British Library](https://bl.uk/). The British Library is a [Legal Deposit](https://www.bl.uk/legal-deposit/about-legal-deposit) library. “Legal deposit requires publishers to provide a copy of every work they publish in the UK to the British Library. It’s existed in English law since 1662.” [source](https://www.bl.uk/legal-deposit/about-legal-deposit).

The source data for this version of the data is derived from the original ALTO XML files and a recent metadata export #TODO add links

[More Information Needed]

#### Initial Data Collection and Normalization

This version of the dataset was created using the original ALTO XML files and, where a match was found, updating the metadata associated with that item with more recent metadata using an export from the British Library catalogue. The process of creating this new dataset is documented here #TODO add link.

There are a few decisions made in the above processing steps worth highlighting in particular:

##### Date normalization

The metadata around date of publication for an item is not always exact. It is often represented as a date range, e.g. `1850-1860`. The `date` field above takes steps to normalise this date to a single integer value. In most cases, this is taking the mean of the values associated with the item (for example, `1850-1860` is normalised to 1855). The `raw_date` field includes the unprocessed date string.

##### Metadata included

The metadata associated with each item includes most of the fields available via the ALTO XML. However, the data doesn’t include some metadata fields from the metadata export file. Fields were excluded because they are frequently not populated. A cut off of 50% was chosen, i.e. values from the metadata which are missing above 50% of the time were not included. This is slightly arbitrary, but since the aim of this version of the data was to support computational research using the collection it was felt that these fields with frequent missing values would be less valuable.

#### Who are the source language producers?

[More Information Needed]

### Annotations

This dataset does not include annotations as usually understood in the context of NLP. The data does include metadata associated with the books.

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

There are a range of considerations around using the data. These include the representativeness of the dataset, the OCR quality and the language used. Depending on your use case, these may be more or less important. For example, the impact of OCR quality on downstream tasks will depend on the target task. It may also be possible to mitigate this negative impact from OCR through tokenizer choice, Language Model training objectives, oversampling high-quality OCR, etc.
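As a concrete illustration of the `skip_empty_pages` option described under Configuration option and of one possible OCR-quality mitigation, a minimal loading sketch follows. The dataset id, configuration name, and field names come from this card; the 0.7 confidence threshold is an arbitrary assumption, not a recommendation.

```python
from datasets import load_dataset

# `skip_empty_pages` defaults to True; pass False to keep pages without text
ds = load_dataset("TheBritishLibrary/blbooks", "1800_1899",
                  split="train", skip_empty_pages=False)

# One possible mitigation for OCR noise: keep only non-empty pages whose
# mean word confidence clears a (here arbitrary) threshold of 0.7
high_conf = ds.filter(lambda page: not page["empty_pg"]
                      and page["mean_wc_ocr"] is not None
                      and page["mean_wc_ocr"] >= 0.7)
print(f"{len(high_conf)} of {len(ds)} pages retained")
```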
[More Information Needed]

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

The text in this collection is derived from historical text. As a result, the text will reflect the social beliefs and attitudes of its time period. The books include both fiction and non-fiction. Examples of book titles that appear in the data (these are randomly sampled from all titles):

- ‘Rhymes and Dreams, Legends of Pendle Forest, and other poems’,
- “Précis of Information concerning the Zulu Country, with a map. Prepared in the Intelligence Branch of the Quarter-Master-General’s Department, Horse Guards, War Office, etc”,
- ‘The fan. A poem’,
- ‘Grif; a story of Australian Life’,
- ‘Calypso; a masque: in three acts, etc’,
- ‘Tales Uncle told [With illustrative woodcuts.]’,
- 'Questings',
- 'Home Life on an Ostrich Farm. With ... illustrations’,
- ‘Bulgarya i Bulgarowie’,
- 'Εἰς τα βαθη της Ἀφρικης [In darkest Africa.] ... Μεταφρασις Γεωρ. Σ. Βουτσινα, etc',
- ‘The Corsair, a tale’, ‘Poems ... With notes [With a portrait.]’,
- ‘Report of the Librarian for the year 1898 (1899, 1901, 1909)’,
- “The World of Thought. A novel. By the author of ‘Before I began to speak.’”,
- 'Amleto; tragedia ... recata in versi italiani da M. Leoni, etc']

While using titles alone is insufficient to investigate bias in this collection, it gives some insight into the topics covered by books. Further, the titles highlight some particular types of bias we might find in the collection. This should in no way be considered an exhaustive list.

#### Colonialism

Even in the above random sample of titles, we can see examples of colonial attitudes. We can try and interrogate this further by searching for the names of places that were part of the British Empire when many of these books were published.

Searching for the string `India` in the titles and randomly sampling 10 titles returns:

- “Travels in India in the Seventeenth Century: by Sir Thomas Roe and Dr. John Fryer. Reprinted from the ‘Calcutta Weekly Englishman.’”,
- ‘A Winter in India and Malaysia among the Methodist Missions’,
- “The Tourist’s Guide to all the principal stations on the railways of Northern India [By W. W.] ... Fifth edition”,
- ‘Records of Sport and Military Life in Western India ... With an introduction by ... G. B. Malleson’,
- "Lakhmi, the Rájpút's Bride. A tale of Gujarát in Western India [A poem.]”,
- ‘The West India Commonplace Book: compiled from parliamentary and official documents; shewing the interest of Great Britain in its Sugar Colonies’,
- “From Tonkin to India : by the sources of the Irawadi, January’ 95-January ’96”,
- ‘Case of the Ameers of Sinde : speeches of Mr. John Sullivan, and Captain William Eastwick, at a special court held at the India House, ... 26th January, 1844’,
- ‘The Andaman Islands; their colonisation, etc. A correspondence addressed to the India Office’,
- ‘Ancient India as described by Ptolemy; being a translation of the chapters which describe India and Eastern Asia in the treatise on Geography written by Klaudios Ptolemaios ... with introduction, commentary, map of India according to Ptolemy, and ... index, by J. W. McCrindle’]

Searching for the string `Africa` in the titles and randomly sampling 10 titles returns:

- ['De Benguella ás Terras de Iácca. Descripção de uma viagem na Africa Central e Occidental ... Expedição organisada nos annos de 1877-1880. Edição illustrada',
- ‘To the New Geographical Society of Edinburgh [An address on Africa by H. M. Stanley.]’,
- ‘Diamonds and Gold in South Africa ... With maps, etc’,
- ‘Missionary Travels and Researches in South Africa ... With notes by F. S. Arnot. With map and illustrations. New edition’,
- ‘A Narrative of a Visit to the Mauritius and South Africa ... Illustrated by two maps, sixteen etchings and twenty-eight wood-cuts’,
- ‘Side Lights on South Africa ... With a map, etc’,
- ‘My Second Journey through Equatorial Africa ... in ... 1886 and 1887 ... Translated ... by M. J. A. Bergmann. With a map ... and ... illustrations, etc’,
- ‘Missionary Travels and Researches in South Africa ... With portrait and fullpage illustrations’,
- ‘[African sketches.] Narrative of a residence in South Africa ... A new edition. To which is prefixed a biographical sketch of the author by J. Conder’,
- ‘Lake Ngami; or, Explorations and discoveries during four years wandering in the wilds of South Western Africa ... With a map, and numerous illustrations, etc’]

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The books are licensed under the [CC Public Domain Mark 1.0](https://creativecommons.org/publicdomain/mark/1.0/) license.

### Citation Information

```bibtex
@misc{BritishLibraryBooks2021,
  author = {British Library Labs},
  title = {Digitised Books. c. 1510 - c. 1900. JSONL (OCR derived text + metadata)},
  year = {2021},
  publisher = {British Library},
  howpublished = {https://doi.org/10.23636/r7w6-zy15}
}
```

### Contributions

Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
TheBritishLibrary/blbooks
[ "task_categories:text-generation", "task_categories:fill-mask", "task_categories:other", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:de", "language:en", "language:es", "language:fr", "language:it", "language:nl", "license:cc0-1.0", "digital-humanities-research", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["de", "en", "es", "fr", "it", "nl"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask", "other"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "British Library Books", "tags": ["digital-humanities-research"], "dataset_info": [{"config_name": "all", "features": [{"name": "record_id", "dtype": "string"}, {"name": "date", "dtype": "int32"}, {"name": "raw_date", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "place", "dtype": "string"}, {"name": "empty_pg", "dtype": "bool"}, {"name": "text", "dtype": "string"}, {"name": "pg", "dtype": "int32"}, {"name": "mean_wc_ocr", "dtype": "float32"}, {"name": "std_wc_ocr", "dtype": "float64"}, {"name": "name", "dtype": "string"}, {"name": "all_names", "dtype": "string"}, {"name": "Publisher", "dtype": "string"}, {"name": "Country of publication 1", "dtype": "string"}, {"name": "all Countries of publication", "dtype": "string"}, {"name": "Physical description", "dtype": "string"}, {"name": "Language_1", "dtype": "string"}, {"name": "Language_2", "dtype": "string"}, {"name": "Language_3", "dtype": "string"}, {"name": "Language_4", "dtype": "string"}, {"name": "multi_language", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 30394267732, "num_examples": 14011953}], "download_size": 10486035662, "dataset_size": 30394267732}, {"config_name": "1800s", "features": [{"name": "record_id", "dtype": "string"}, {"name": "date", "dtype": "int32"}, {"name": "raw_date", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "place", "dtype": "string"}, {"name": "empty_pg", "dtype": "bool"}, {"name": "text", "dtype": "string"}, {"name": "pg", "dtype": "int32"}, {"name": "mean_wc_ocr", "dtype": "float32"}, {"name": "std_wc_ocr", "dtype": "float64"}, {"name": "name", "dtype": "string"}, {"name": "all_names", "dtype": "string"}, {"name": "Publisher", "dtype": "string"}, {"name": "Country of publication 1", "dtype": "string"}, {"name": "all Countries of publication", "dtype": "string"}, {"name": "Physical description", "dtype": "string"}, {"name": "Language_1", "dtype": "string"}, {"name": "Language_2", "dtype": "string"}, {"name": "Language_3", "dtype": "string"}, {"name": "Language_4", "dtype": "string"}, {"name": "multi_language", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 30020434670, "num_examples": 13781747}], "download_size": 10348577602, "dataset_size": 30020434670}, {"config_name": "1700s", "features": [{"name": "record_id", "dtype": "string"}, {"name": "date", "dtype": "int32"}, {"name": "raw_date", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "place", "dtype": "string"}, {"name": "empty_pg", "dtype": "bool"}, {"name": "text", "dtype": "string"}, {"name": "pg", "dtype": "int32"}, {"name": "mean_wc_ocr", "dtype": "float32"}, {"name": "std_wc_ocr", "dtype": "float64"}, {"name": "name", "dtype": "string"}, {"name": "all_names", "dtype": "string"}, {"name": "Publisher", "dtype": "string"}, {"name": "Country of publication 1", "dtype": "string"}, {"name": "all Countries of publication", "dtype": "string"}, {"name": "Physical description", "dtype": "string"}, {"name": "Language_1", "dtype": "string"}, {"name": "Language_2", "dtype": "string"}, {"name": "Language_3", "dtype": "string"}, {"name": "Language_4", "dtype": "string"}, 
{"name": "multi_language", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 266382657, "num_examples": 178224}], "download_size": 95137895, "dataset_size": 266382657}, {"config_name": "1510_1699", "features": [{"name": "record_id", "dtype": "string"}, {"name": "date", "dtype": "timestamp[s]"}, {"name": "raw_date", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "place", "dtype": "string"}, {"name": "empty_pg", "dtype": "bool"}, {"name": "text", "dtype": "string"}, {"name": "pg", "dtype": "int32"}, {"name": "mean_wc_ocr", "dtype": "float32"}, {"name": "std_wc_ocr", "dtype": "float64"}, {"name": "name", "dtype": "string"}, {"name": "all_names", "dtype": "string"}, {"name": "Publisher", "dtype": "string"}, {"name": "Country of publication 1", "dtype": "string"}, {"name": "all Countries of publication", "dtype": "string"}, {"name": "Physical description", "dtype": "string"}, {"name": "Language_1", "dtype": "string"}, {"name": "Language_2", "dtype": "string"}, {"name": "Language_3", "dtype": "string"}, {"name": "Language_4", "dtype": "string"}, {"name": "multi_language", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 107667469, "num_examples": 51982}], "download_size": 42320165, "dataset_size": 107667469}, {"config_name": "1500_1899", "features": [{"name": "record_id", "dtype": "string"}, {"name": "date", "dtype": "timestamp[s]"}, {"name": "raw_date", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "place", "dtype": "string"}, {"name": "empty_pg", "dtype": "bool"}, {"name": "text", "dtype": "string"}, {"name": "pg", "dtype": "int32"}, {"name": "mean_wc_ocr", "dtype": "float32"}, {"name": "std_wc_ocr", "dtype": "float64"}, {"name": "name", "dtype": "string"}, {"name": "all_names", "dtype": "string"}, {"name": "Publisher", "dtype": "string"}, {"name": "Country of publication 1", "dtype": "string"}, {"name": "all Countries of publication", "dtype": "string"}, {"name": "Physical description", "dtype": "string"}, {"name": "Language_1", "dtype": "string"}, {"name": "Language_2", "dtype": "string"}, {"name": "Language_3", "dtype": "string"}, {"name": "Language_4", "dtype": "string"}, {"name": "multi_language", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 30452067039, "num_examples": 14011953}], "download_size": 10486035662, "dataset_size": 30452067039}, {"config_name": "1800_1899", "features": [{"name": "record_id", "dtype": "string"}, {"name": "date", "dtype": "timestamp[s]"}, {"name": "raw_date", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "place", "dtype": "string"}, {"name": "empty_pg", "dtype": "bool"}, {"name": "text", "dtype": "string"}, {"name": "pg", "dtype": "int32"}, {"name": "mean_wc_ocr", "dtype": "float32"}, {"name": "std_wc_ocr", "dtype": "float64"}, {"name": "name", "dtype": "string"}, {"name": "all_names", "dtype": "string"}, {"name": "Publisher", "dtype": "string"}, {"name": "Country of publication 1", "dtype": "string"}, {"name": "all Countries of publication", "dtype": "string"}, {"name": "Physical description", "dtype": "string"}, {"name": "Language_1", "dtype": "string"}, {"name": "Language_2", "dtype": "string"}, {"name": "Language_3", "dtype": "string"}, {"name": "Language_4", "dtype": "string"}, {"name": "multi_language", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 30077284377, "num_examples": 13781747}], "download_size": 10348577602, "dataset_size": 30077284377}, {"config_name": "1700_1799", "features": [{"name": "record_id", "dtype": "string"}, {"name": 
"date", "dtype": "timestamp[s]"}, {"name": "raw_date", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "place", "dtype": "string"}, {"name": "empty_pg", "dtype": "bool"}, {"name": "text", "dtype": "string"}, {"name": "pg", "dtype": "int32"}, {"name": "mean_wc_ocr", "dtype": "float32"}, {"name": "std_wc_ocr", "dtype": "float64"}, {"name": "name", "dtype": "string"}, {"name": "all_names", "dtype": "string"}, {"name": "Publisher", "dtype": "string"}, {"name": "Country of publication 1", "dtype": "string"}, {"name": "all Countries of publication", "dtype": "string"}, {"name": "Physical description", "dtype": "string"}, {"name": "Language_1", "dtype": "string"}, {"name": "Language_2", "dtype": "string"}, {"name": "Language_3", "dtype": "string"}, {"name": "Language_4", "dtype": "string"}, {"name": "multi_language", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 267117831, "num_examples": 178224}], "download_size": 95137895, "dataset_size": 267117831}]}
2022-11-03T16:31:29+00:00
[]
[ "de", "en", "es", "fr", "it", "nl" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_categories-other #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-German #language-English #language-Spanish #language-French #language-Italian #language-Dutch #license-cc0-1.0 #digital-humanities-research #region-us
Dataset Card for British Library Books
======================================

Table of Contents
-----------------

* Dataset Card for British Library Books
	+ Table of Contents
	+ Dataset Description
		- Dataset Summary
		- Supported Tasks and Leaderboards
			* Language model training
			* Supervised tasks
		- Languages
			* Language change
			* Optical Character Recognition
				+ OCR word confidence
	+ Dataset Structure
		- Data Instances
		- Data Fields
		- Data Splits
	+ Dataset Creation
		- Curation Rationale
		- Source Data
			* Initial Data Collection and Normalization
				+ Date normalization
				+ Metadata included
			* Who are the source language producers?
		- Annotations
			* Annotation process
			* Who are the annotators?
		- Personal and Sensitive Information
	+ Considerations for Using the Data
		- Social Impact of Dataset
		- Discussion of Biases
			* Colonialism
		- Other Known Limitations
	+ Additional Information
		- Dataset Curators
		- Licensing Information
		- Citation Information
		- Contributions

Dataset Description
-------------------

* Homepage: URL
* Repository: URL
* Paper:
* Leaderboard:
* Point of Contact: labs@URL

### Dataset Summary

This dataset consists of books digitised by the British Library in partnership with Microsoft. The dataset includes ~25 million pages of out of copyright texts. The majority of the texts were published in the 18th and 19th Century, but the collection also consists of a smaller number of books from earlier periods. Items within this collection cover a wide range of subject areas, including geography, philosophy, history, poetry and literature and are published in various languages.

While the books are predominately from the 18th and 19th Centuries, there are fewer books from earlier periods. The number of pages in the corpus by decade:

### Supported Tasks and Leaderboards

This collection has been previously used across various digital history and humanities projects since being published.

The dataset consists of text and a range of metadata associated with this text. This metadata includes:

* date of publication
* place of publication
* country of publication
* language
* OCR quality
* physical description of the original physical item

#### Language model training

As a relatively large dataset, 'blbooks' provides a source dataset for training language models. The presence of this metadata also offers interesting opportunities to use this dataset as a source for training language models based on:

* specific time-periods
* specific languages
* certain OCR quality thresholds

The above is not an exhaustive list but offers some suggestions of how the dataset can be used to explore topics such as the impact of OCR quality on language models, the ‘transferability’ of language models across time or the impact of training multilingual language models on historical languages.

#### Supervised tasks

Whilst this dataset does not have annotations for a specific NLP task, such as Named Entity Recognition, it does include a wide variety of metadata. This metadata has the potential to be used for training and/or evaluating a variety of supervised tasks predicting this metadata.

### Languages

This dataset consists of books published in several languages. The breakdown of the languages included (at the page level) is:

This breakdown was derived from the first language in the associated metadata field. Some books include multiple languages. Some of the language codes for this data were also derived using computational methods.
Therefore, the language fields in the dataset should be treated with some caution (discussed in more detail below).

#### Language change

The publication dates of books in the data cover a broad period of time (1500-1900). For languages in the dataset with broad temporal coverage, significant language change might be found. The ability to study this change by taking reasonably large samples of languages covering different time periods is one of the opportunities offered by this dataset. The fact that the text in this dataset was produced via Optical Character Recognition (OCR) causes some challenges for this type of research (see below).

#### Optical Character Recognition

The digitised books in this collection were transformed into machine-readable text using Optical Character Recognition (OCR) software. The text produced via OCR software will usually include some errors. These errors include mistakes at the character level (for example, an 'i' mistaken for an 'l'), at the word level, or across significant passages of text.

The books in this dataset can pose some additional challenges for OCR software. OCR errors can stem from:

* the quality of the original printing: printing technology was a developing technology during the time period covered by this corpus; some of the original book text will include misprints, blurred or faded ink that is hard to read
* damage to the page: some of the books will have become damaged over time; this can obscure all or parts of the text on a page
* poor quality scans: scanning books can be challenging; for example, if the book has tight bindings, it can be hard to capture text that has fallen into the gutter of the book.
* the language used in the books may differ from the languages OCR software is predominantly trained to recognise.

##### OCR word confidence

Many OCR engines produce some form of confidence score alongside the predicted text. These confidence scores are usually at the character or word level. The word confidence score was given for each word in the original ALTO XML versions of the text in this dataset. The OCR confidence scores should be treated with some scepticism. For historical text or in a lower resource language, for example, a low confidence score may be more likely for words not included in a modern dictionary but may be accurate transcriptions of the original text. With that said, the confidence scores do give some sense of the OCR quality.

An example of text with a high mean word confidence score (over 90%):

An example of text with a score below 40%:

The quality of OCR - as measured by mean OCR confidence for a page - across the dataset correlates with other features. A groupby of publication decade and mean word confidence:

As might be expected, the earlier periods have lower mean word confidence scores. Again, all of this should be treated with some scepticism, especially as the size of the data grows over time.

As with time, the mean word confidence of the OCR software varies across languages:

Again, these numbers should be treated sceptically since some languages appear very infrequently. For example, the above table suggests the mean word confidence for Welsh is relatively high. However, there isn’t much Welsh in the dataset. Therefore, it is unlikely that this data will be particularly useful for training (historic) Welsh language models.
Dataset Structure ----------------- The dataset has a number of configurations relating to the different dates of publication in the underlying data: * '1500\_1899': this configuration covers all years * '1800\_1899': this configuration covers the years between 1800 and 1899 * '1700\_1799': this configuration covers the years between 1700 and 1799 * '1510\_1699': this configuration covers the years between 1510 and 1699 ### Configuration option All of the configurations have an optional keyword argument 'skip\_empty\_pages' which is set to 'True' by default. The underlying dataset includes some pages where there is no text. This could either be because the underlying book page didn't have any text or the OCR software failed to detect this text. For many uses of this dataset it doesn't make sense to include empty pages so these are skipped by default. However, for some uses you may prefer to retain a representation of the data that includes these empty pages. Passing 'skip\_empty\_pages=False' when loading the dataset will enable this option. ### Data Instances An example data instance: Each instance in the dataset represents a single page from an original digitised book. ### Data Fields Included in this dataset are: Field: record\_id, Data Type: string, Description: British Library ID for the item Field: date, Data Type: int, Description: parsed/normalised year for the item. i.e. 1850 Field: raw\_date, Data Type: string, Description: the original raw date for an item i.e. 1850- Field: title, Data Type: string, Description: title of the book Field: place, Data Type: string, Description: Place of publication, i.e. London Field: empty\_pg, Data Type: bool, Description: whether page contains text Field: text, Data Type: string, Description: OCR generated text for a page Field: pg, Data Type: int, Description: page in original book the instance refers to Field: mean\_wc\_ocr, Data Type: float, Description: mean word confidence values for the page Field: std\_wc\_ocr, Data Type: float, Description: standard deviation of the word confidence values for the page Field: name, Data Type: string, Description: name associated with the item (usually author) Field: all names, Data Type: string, Description: all names associated with a publication Field: Publisher, Data Type: string, Description: publisher of the book Field: Country of publication 1, Data Type: string, Description: first country associated with publication Field: all Countries of publication, Data Type: string, Description: all countries associated with a publication Field: Physical description, Data Type: string, Description: physical description of the item (size). This requires some normalisation before use and isn’t always present Field: Language\_1, Data Type: string, Description: first language associated with the book, this is usually present Field: Language\_2, Data Type: string, Description: Field: Language\_3, Data Type: string, Description: Field: Language\_4, Data Type: string, Description: Field: multi\_language, Data Type: bool, Description: Some of these fields are not populated a large proportion of the time. You can get some sense of this from this Pandas Profiling report The majority of these fields relate to metadata about the books. Most of these fields were created by staff working for the British Library. The notable exception is the “Languages” fields that have sometimes been determined using computational methods. This work is reported in more detail in Automated Language Identification of Bibliographic Resources. 
It is important to note that metadata is neither perfect nor static. The metadata associated with these books was generated based on an export from the British Library catalogue in 2021.

### Data Splits

This dataset contains a single split 'train'.

Dataset Creation
----------------

Note this section is a work in progress.

### Curation Rationale

The books in this collection were digitised as part of a project partnership between the British Library and Microsoft. Mass digitisation, i.e. projects intending to quickly digitise large volumes of materials, shapes the selection of materials to include in several ways. Some considerations which are often involved in the decision of whether to include items for digitisation include (but are not limited to):

* copyright status
* preservation needs
* the size of an item: very large and very small items are often hard to digitise quickly

These criteria can have knock-on effects on the makeup of a collection. For example, systematically excluding large books may result in some types of book content not being digitised. Large volumes are likely to be correlated with content to at least some extent, so excluding them from digitisation will mean that material is underrepresented. Similarly, copyright status is often (but not only) determined by publication date. This can often lead to a rapid fall in the number of items in a collection after a certain cut-off date.

All of the above is largely to make clear that this collection was not curated to create a representative sample of the British Library’s holdings. Some material will be over-represented, and others under-represented. Similarly, the collection should not be considered a representative sample of what was published across the period covered by the dataset (nor do the relative proportions of the data for each time period represent a proportional sample of publications from that period). Finally, and this probably does not need stating, the language included in the text should not be considered representative of either written or spoken language(s) from that time period.

### Source Data

The source data (physical items) includes a variety of resources (predominantly monographs) held by the British Library. The British Library is a Legal Deposit library. “Legal deposit requires publishers to provide a copy of every work they publish in the UK to the British Library. It’s existed in English law since 1662.” source.

The source data for this version of the data is derived from the original ALTO XML files and a recent metadata export #TODO add links

#### Initial Data Collection and Normalization

This version of the dataset was created using the original ALTO XML files and, where a match was found, updating the metadata associated with that item with more recent metadata using an export from the British Library catalogue. The process of creating this new dataset is documented here #TODO add link.

There are a few decisions made in the above processing steps worth highlighting in particular:

##### Date normalization

The metadata around date of publication for an item is not always exact. It is often represented as a date range, e.g. '1850-1860'. The 'date' field above takes steps to normalise this date to a single integer value. In most cases, this is taking the mean of the values associated with the item (for example, '1850-1860' is normalised to 1855). The 'raw\_date' field includes the unprocessed date string.

##### Metadata included

The metadata associated with each item includes most of the fields available via the ALTO XML.
However, the data doesn’t include some metadata fields from the metadata export file. Fields were excluded because they are frequently not populated. A cut off of 50% was chosen, i.e. values from the metadata which are missing above 50% of the time were not included. This is slightly arbitrary, but since the aim of this version of the data was to support computational research using the collection it was felt that these fields with frequent missing values would be less valuable.

#### Who are the source language producers?

### Annotations

This dataset does not include annotations as usually understood in the context of NLP. The data does include metadata associated with the books.

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

Considerations for Using the Data
---------------------------------

There are a range of considerations around using the data. These include the representativeness of the dataset, the OCR quality and the language used. Depending on your use case, these may be more or less important. For example, the impact of OCR quality on downstream tasks will depend on the target task. It may also be possible to mitigate this negative impact from OCR through tokenizer choice, Language Model training objectives, oversampling high-quality OCR, etc.

### Social Impact of Dataset

### Discussion of Biases

The text in this collection is derived from historical text. As a result, the text will reflect the social beliefs and attitudes of its time period. The books include both fiction and non-fiction. Examples of book titles that appear in the data (these are randomly sampled from all titles):

* ‘Rhymes and Dreams, Legends of Pendle Forest, and other poems’,
* “Précis of Information concerning the Zulu Country, with a map. Prepared in the Intelligence Branch of the Quarter-Master-General’s Department, Horse Guards, War Office, etc”,
* ‘The fan. A poem’,
* ‘Grif; a story of Australian Life’,
* ‘Calypso; a masque: in three acts, etc’,
* ‘Tales Uncle told [With illustrative woodcuts.]’,
* 'Questings',
* 'Home Life on an Ostrich Farm. With ... illustrations’,
* ‘Bulgarya i Bulgarowie’,
* 'Εἰς τα βαθη της Ἀφρικης [In darkest Africa.] ... Μεταφρασις Γεωρ. Σ. Βουτσινα, etc',
* ‘The Corsair, a tale’, ‘Poems ... With notes [With a portrait.]’,
* ‘Report of the Librarian for the year 1898 (1899, 1901, 1909)’,
* “The World of Thought. A novel. By the author of ‘Before I began to speak.’”,
* 'Amleto; tragedia ... recata in versi italiani da M. Leoni, etc']

While using titles alone is insufficient to investigate bias in this collection, it gives some insight into the topics covered by books. Further, the titles highlight some particular types of bias we might find in the collection. This should in no way be considered an exhaustive list.

#### Colonialism

Even in the above random sample of titles, we can see examples of colonial attitudes. We can try and interrogate this further by searching for the names of places that were part of the British Empire when many of these books were published.

Searching for the string 'India' in the titles and randomly sampling 10 titles returns:

* “Travels in India in the Seventeenth Century: by Sir Thomas Roe and Dr. John Fryer. Reprinted from the ‘Calcutta Weekly Englishman.’”,
* ‘A Winter in India and Malaysia among the Methodist Missions’,
* “The Tourist’s Guide to all the principal stations on the railways of Northern India [By W. W.] ... Fifth edition”,
* ‘Records of Sport and Military Life in Western India ... With an introduction by ... G. B. Malleson’,
* "Lakhmi, the Rájpút's Bride. A tale of Gujarát in Western India [A poem.]”,
* ‘The West India Commonplace Book: compiled from parliamentary and official documents; shewing the interest of Great Britain in its Sugar Colonies’,
* “From Tonkin to India : by the sources of the Irawadi, January’ 95-January ’96”,
* ‘Case of the Ameers of Sinde : speeches of Mr. John Sullivan, and Captain William Eastwick, at a special court held at the India House, ... 26th January, 1844’,
* ‘The Andaman Islands; their colonisation, etc. A correspondence addressed to the India Office’,
* ‘Ancient India as described by Ptolemy; being a translation of the chapters which describe India and Eastern Asia in the treatise on Geography written by Klaudios Ptolemaios ... with introduction, commentary, map of India according to Ptolemy, and ... index, by J. W. McCrindle’]

Searching for the string 'Africa' in the titles and randomly sampling 10 titles returns:

* ['De Benguella ás Terras de Iácca. Descripção de uma viagem na Africa Central e Occidental ... Expedição organisada nos annos de 1877-1880. Edição illustrada',
* ‘To the New Geographical Society of Edinburgh [An address on Africa by H. M. Stanley.]’,
* ‘Diamonds and Gold in South Africa ... With maps, etc’,
* ‘Missionary Travels and Researches in South Africa ... With notes by F. S. Arnot. With map and illustrations. New edition’,
* ‘A Narrative of a Visit to the Mauritius and South Africa ... Illustrated by two maps, sixteen etchings and twenty-eight wood-cuts’,
* ‘Side Lights on South Africa ... With a map, etc’,
* ‘My Second Journey through Equatorial Africa ... in ... 1886 and 1887 ... Translated ... by M. J. A. Bergmann. With a map ... and ... illustrations, etc’,
* ‘Missionary Travels and Researches in South Africa ... With portrait and fullpage illustrations’,
* ‘[African sketches.] Narrative of a residence in South Africa ... A new edition. To which is prefixed a biographical sketch of the author by J. Conder’,
* ‘Lake Ngami; or, Explorations and discoveries during four years wandering in the wilds of South Western Africa ... With a map, and numerous illustrations, etc’]

### Other Known Limitations

Additional Information
----------------------

### Dataset Curators

### Licensing Information

The books are licensed under the CC Public Domain Mark 1.0 license.

### Contributions

Thanks to @davanstrien for adding this dataset.
### Other Known Limitations

Additional Information
----------------------

### Dataset Curators

### Licensing Information

The books are licensed under the CC Public Domain Mark 1.0 license.

### Contributions

Thanks to @davanstrien for adding this dataset.
de087348b4ef8c44c2978f8ff819e9e3862089e6
# Dataset Card for blbooksgenre

## Table of Contents

- [Dataset Card for blbooksgenre](#dataset-card-for-blbooksgenre)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
      - [Supervised tasks](#supervised-tasks)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
      - [Colonialism](#colonialism)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://doi.org/10.23636/BKHQ-0312](https://doi.org/10.23636/BKHQ-0312)
- **Repository:** [https://doi.org/10.23636/BKHQ-0312](https://doi.org/10.23636/BKHQ-0312)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset consists of metadata relating to books [digitised by the British Library in partnership with Microsoft](https://www.bl.uk/collection-guides/google-books-digitised-printed-heritage). Some of this metadata was exported from the British Library catalogue, whilst other metadata was generated as part of a crowdsourcing project. The text of these books and other metadata can be found on the [data.bl](https://data.bl.uk/bl_labs_datasets/#3) website.

The majority of the books in this collection were published in the 18th and 19th centuries, but the collection also includes a smaller number of books from earlier periods. Items within this collection cover a wide range of subject areas, including geography, philosophy, history, poetry and literature, and are published in a variety of languages.

For the subsection of the data which contains additional crowdsourced annotations, the breakdown by decade of publication is as follows:

| Decade of publication | Count |
| --------------------- | ----- |
| 1630 | 8 |
| 1690 | 4 |
| 1760 | 10 |
| 1770 | 5 |
| 1780 | 5 |
| 1790 | 18 |
| 1800 | 45 |
| 1810 | 96 |
| 1820 | 152 |
| 1830 | 182 |
| 1840 | 259 |
| 1850 | 400 |
| 1860 | 377 |
| 1870 | 548 |
| 1880 | 776 |
| 1890 | 1484 |
| 1900 | 17 |
| 1910 | 1 |
| 1970 | 1 |

[More Information Needed]

### Supported Tasks and Leaderboards

The digitised books collection which this dataset describes has been used in a variety of digital history and humanities projects since being published. This dataset is suitable for a variety of unsupervised tasks and for a supervised 'genre classification' task.

#### Supervised tasks

The main possible use case for this dataset is to develop and evaluate 'genre classification' models. The dataset includes human-generated labels for whether a book is 'fiction' or 'non-fiction'. These have been used to train genre classification models which predict whether a book is 'fiction' or 'non-fiction' based on its title.
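As an illustration of what such a model might look like (a sketch only, not the pipeline used in any published project), a simple bag-of-words baseline over titles can be built with scikit-learn; the titles and labels below are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative titles with labels following the dataset's scheme:
# 0 = Fiction, 1 = Non-fiction.
titles = [
    "The Corsair, a tale",
    "Grif; a story of Australian Life",
    "Diamonds and Gold in South Africa ... With maps, etc",
    "Report of the Librarian for the year 1898",
]
labels = [0, 0, 1, 1]

# TF-IDF features over the title text feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(titles, labels)

print(model.predict(["A Narrative of a Visit to the Mauritius and South Africa"]))
```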
### Languages

[More Information Needed]

## Dataset Structure

The dataset currently has three configurations, intended to support a range of tasks for which this dataset could be used:

- `title_genre_classifiction`: this creates a de-duplicated version of the dataset with the `BL record`, `title` and `label` fields.
- `annotated_raw`: this version of the dataset includes all fields from the original dataset which are annotated. This includes duplication from different annotators.
- `raw`: this version of the dataset includes all the data from the original data, including data without annotations.

A loading sketch covering all three configurations is given after the example instances below.

### Data Instances

An example data instance from the `title_genre_classifiction` config:

```python
{'BL record ID': '014603046',
 'title': 'The Canadian farmer. A missionary incident [Signed: W. J. H. Y, i.e. William J. H. Yates.]',
 'label': 0}
```

An example data instance from the `annotated_raw` config:

```python
{'BL record ID': '014603046',
 'Name': 'Yates, William Joseph H.',
 'Dates associated with name': '',
 'Type of name': 'person',
 'Role': '',
 'All names': ['Yates, William Joseph H. [person] ', ' Y, W. J. H. [person]'],
 'Title': 'The Canadian farmer. A missionary incident [Signed: W. J. H. Y, i.e. William J. H. Yates.]',
 'Variant titles': '',
 'Series title': '',
 'Number within series': '',
 'Country of publication': ['England'],
 'Place of publication': ['London'],
 'Publisher': '',
 'Date of publication': '1879',
 'Edition': '',
 'Physical description': 'pages not numbered, 21 cm',
 'Dewey classification': '',
 'BL shelfmark': 'Digital Store 11601.f.36. (1.)',
 'Topics': '',
 'Genre': '',
 'Languages': ['English'],
 'Notes': 'In verse',
 'BL record ID for physical resource': '004079262',
 'classification_id': '267476823.0',
 'user_id': '15.0',
 'subject_ids': '44369003.0',
 'annotator_date_pub': '1879',
 'annotator_normalised_date_pub': '1879',
 'annotator_edition_statement': 'NONE',
 'annotator_FAST_genre_terms': '655 7 ‡aPoetry‡2fast‡0(OCoLC)fst01423828',
 'annotator_FAST_subject_terms': '60007 ‡aAlice,‡cGrand Duchess, consort of Ludwig IV, Grand Duke of Hesse-Darmstadt,‡d1843-1878‡2fast‡0(OCoLC)fst00093827',
 'annotator_comments': '',
 'annotator_main_language': '',
 'annotator_other_languages_summaries': 'No',
 'annotator_summaries_language': '',
 'annotator_translation': 'No',
 'annotator_original_language': '',
 'annotator_publisher': 'NONE',
 'annotator_place_pub': 'London',
 'annotator_country': 'enk',
 'annotator_title': 'The Canadian farmer. A missionary incident [Signed: W. J. H. Y, i.e. William J. H. Yates.]',
 'Link to digitised book': 'http://access.bl.uk/item/viewer/ark:/81055/vdc_00000002842E',
 'annotated': True,
 'Type of resource': 0,
 'created_at': datetime.datetime(2020, 8, 11, 14, 30, 33),
 'annotator_genre': 0}
```
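A minimal loading sketch for the three configurations (the repository id is taken from this dataset's Hub listing; the `annotated` flag used below is described under "Data Fields"):

```python
from datasets import load_dataset

REPO_ID = "TheBritishLibrary/blbooksgenre"

titles = load_dataset(REPO_ID, "title_genre_classifiction", split="train")
annotated = load_dataset(REPO_ID, "annotated_raw", split="train")
raw = load_dataset(REPO_ID, "raw", split="train")

print(titles[0])  # {'BL record ID': ..., 'title': ..., 'label': ...}

# The `raw` config also contains unannotated rows; the boolean `annotated`
# flag (see "Data Fields" below) recovers the annotated subset.
only_annotated = raw.filter(lambda row: row["annotated"])
print(len(raw), len(only_annotated))
```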
### Data Fields

The data fields differ slightly between configs. All possible fields for the `annotated_raw` config are listed below. For the `raw` version of the dataset, datatypes are usually strings to avoid errors when processing missing values.

- `BL record ID`: an internal ID used by the British Library; this can be useful for linking this data to other BL collections.
- `Name`: name associated with the item (usually the author)
- `Dates associated with name`: dates associated with the above, e.g. date of birth
- `Type of name`: whether `Name` is a person or an organization etc.
- `Role`: i.e. whether `Name` is `author`, `publisher` etc.
- `All names`: a fuller list of names associated with the item.
- `Title`: the title of the work
- `Variant titles`
- `Series title`
- `Number within series`
- `Country of publication`: encoded as a list of countries listed in the metadata
- `Place of publication`: encoded as a list of places listed in the metadata
- `Publisher`
- `Date of publication`: encoded as a string, since this field can include date ranges, e.g. `1850-1855`.
- `Edition`
- `Physical description`: encoded as a string, since the format of this field varies
- `Dewey classification`
- `BL shelfmark`: a British Library shelfmark
- `Topics`: topics included in the catalogue record
- `Genre`: the genre information included in the original catalogue record; note that this is often missing
- `Languages`: encoded as a list of languages
- `Notes`: notes from the catalogue record
- `BL record ID for physical resource`

The following fields are all generated via the crowdsourcing task (discussed in more detail below):

- `classification_id`: ID for the classification in the annotation task
- `user_id`: ID for the annotator
- `subject_ids`: internal annotation task ID
- `annotator_date_pub`: an updated publication date
- `annotator_normalised_date_pub`: normalized version of the above
- `annotator_edition_statement`: updated edition
- `annotator_FAST_genre_terms`: [FAST classification genre terms](https://www.oclc.org/research/areas/data-science/fast.html)
- `annotator_FAST_subject_terms`: [FAST subject terms](https://www.oclc.org/research/areas/data-science/fast.html)
- `annotator_comments`: free-form comments
- `annotator_main_language`
- `annotator_other_languages_summaries`
- `annotator_summaries_language`
- `annotator_translation`
- `annotator_original_language`
- `annotator_publisher`
- `annotator_place_pub`
- `annotator_country`
- `annotator_title`
- `Link to digitised book`
- `annotated`: `bool` flag to indicate whether the row has annotations or not
- `created_at`: when the annotation was created
- `annotator_genre`: the updated annotation for the `genre` of the book.

[More Information Needed]

Finally, the `label` field of the `title_genre_classifiction` configuration is a class label with values 0 (Fiction) or 1 (Non-fiction).
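The integer labels can be mapped to and from their string names via the dataset's features; a short sketch, assuming the `title_genre_classifiction` config:

```python
from datasets import load_dataset

ds = load_dataset("TheBritishLibrary/blbooksgenre", "title_genre_classifiction", split="train")

label_feature = ds.features["label"]  # a ClassLabel feature
print(label_feature.names)                    # ['Fiction', 'Non-fiction']
print(label_feature.int2str(ds[0]["label"]))  # e.g. 'Fiction'
print(label_feature.str2int("Non-fiction"))   # 1
```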
### Data Splits

This dataset contains a single split, `train`.

## Dataset Creation

**Note**: this section is a work in progress.

### Curation Rationale

The books in this collection were digitised as part of a project partnership between the British Library and Microsoft. [Mass digitisation](https://en.wikipedia.org/wiki/Category:Mass_digitization), i.e. projects where there is a goal to quickly digitise large volumes of materials, shapes the selection of materials to include in a number of ways. Some considerations which are often involved in the decision of whether to include items for digitisation include (but are not limited to):

- copyright status
- preservation needs
- the size of an item (very large and very small items are often hard to digitise quickly)

These criteria can have knock-on effects on the makeup of a collection. For example, systematically excluding large books may result in some types of book content not being digitised. Large volumes are likely to be correlated with content to at least some extent, so excluding them from digitisation will mean that material is under-represented. Similarly, copyright status is often (but not only) determined by publication date. This can often lead to a rapid fall in the number of items in a collection after a certain cut-off date.

All of the above is largely to make clear that this collection was not curated with the aim of creating a representative sample of the British Library's holdings. Some material will be over-represented and other material under-represented. Similarly, the collection should not be considered a representative sample of what was published across the time period covered by the dataset (nor do the relative proportions of the data for each time period represent a proportional sample of publications from that period).

[More Information Needed]

### Source Data

The original source data (physical items) includes a variety of resources (predominantly monographs) held by the [British Library](https://bl.uk/). The British Library is a [Legal Deposit](https://www.bl.uk/legal-deposit/about-legal-deposit) library: "Legal deposit requires publishers to provide a copy of every work they publish in the UK to the British Library. It's existed in English law since 1662." [source](https://www.bl.uk/legal-deposit/about-legal-deposit)

[More Information Needed]

#### Initial Data Collection and Normalization

This version of the dataset was created partially from data exported from British Library catalogue records and partially from data generated via a crowdsourcing task involving British Library staff.

#### Who are the source language producers?

[More Information Needed]

### Annotations

The data does include metadata associated with the books; these records are produced by British Library staff. The additional annotations were carried out during 2020 as part of an internal crowdsourcing task.

#### Annotation process

New annotations were produced via a crowdsourcing task. Annotators had the option to pick titles from a particular language subset of the broader digitised 19th-century books collection. As a result, the annotations are not random and over-represent some languages.

[More Information Needed]

#### Who are the annotators?

Staff working at the British Library. Most of these staff work with metadata as part of their jobs and so could be considered expert annotators.

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

There are a range of considerations around using the data. These include the representativeness of the dataset, the bias towards particular languages, etc.

It is also important to note that library metadata is not static. The metadata held in library catalogues is updated and changed over time for a variety of reasons. The way in which different institutions catalogue items also varies. As a result, it is important to evaluate the performance of any models trained on this data before applying them to a new collection.

[More Information Needed]

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

The text in this collection is derived from historic text. As a result, the text will reflect the social beliefs and attitudes of its time period. The titles of the books give some sense of their content. Examples of book titles which appear in the data (these are randomly sampled from all titles):

- 'Rhymes and Dreams, Legends of Pendle Forest, and other poems',
- "Précis of Information concerning the Zulu Country, with a map. Prepared in the Intelligence Branch of the Quarter-Master-General's Department, Horse Guards, War Office, etc",
- 'The fan. A poem',
- 'Grif; a story of Australian Life',
- 'Calypso; a masque: in three acts, etc',
- 'Tales Uncle told [With illustrative woodcuts.]',
- 'Questings',
- 'Home Life on an Ostrich Farm. With ... illustrations',
- 'Bulgarya i Bulgarowie',
- 'Εἰς τα βαθη της Ἀφρικης [In darkest Africa.] ... Μεταφρασις Γεωρ. Σ. Βουτσινα, etc',
- 'The Corsair, a tale',
- 'Poems ... With notes [With a portrait.]',
- 'Report of the Librarian for the year 1898 (1899, 1901, 1909)',
- "The World of Thought. A novel. By the author of 'Before I began to speak.'",
- 'Amleto; tragedia ... recata in versi italiani da M. Leoni, etc'

Whilst using titles alone is obviously insufficient to interrogate bias in this collection, it gives some insight into the topics covered by books in the corpus. Further, looking into the titles highlights some particular types of bias we might find in the collection. This should in no way be considered an exhaustive list.

#### Colonialism

We can see, even in the above random sample of titles, examples of colonial attitudes. We can try to interrogate this further by searching for the names of countries which were part of the British Empire at the time many of these books were published (a code sketch of this search-and-sample approach follows the two lists below).

Searching for the string 'India' in the titles and randomly sampling 10 titles returns:

- "Travels in India in the Seventeenth Century: by Sir Thomas Roe and Dr. John Fryer. Reprinted from the 'Calcutta Weekly Englishman.'",
- 'A Winter in India and Malaysia among the Methodist Missions',
- "The Tourist's Guide to all the principal stations on the railways of Northern India [By W. W.] ... Fifth edition",
- 'Records of Sport and Military Life in Western India ... With an introduction by ... G. B. Malleson',
- "Lakhmi, the Rájpút's Bride. A tale of Gujarát in Western India [A poem.]",
- 'The West India Commonplace Book: compiled from parliamentary and official documents; shewing the interest of Great Britain in its Sugar Colonies',
- "From Tonkin to India : by the sources of the Irawadi, January '95-January '96",
- 'Case of the Ameers of Sinde : speeches of Mr. John Sullivan, and Captain William Eastwick, at a special court held at the India House, ... 26th January, 1844',
- 'The Andaman Islands; their colonization, etc. A correspondence addressed to the India Office',
- 'Ancient India as described by Ptolemy; being a translation of the chapters which describe India and Eastern Asia in the treatise on Geography written by Klaudios Ptolemaios ... with introduction, commentary, map of India according to Ptolemy, and ... index, by J. W. McCrindle'

Searching for the string 'Africa' in the titles and randomly sampling 10 titles returns:

- 'De Benguella ás Terras de Iácca. Descripção de uma viagem na Africa Central e Occidental ... Expedição organisada nos annos de 1877-1880. Edição illustrada',
- 'To the New Geographical Society of Edinburgh [An address on Africa by H. M. Stanley.]',
- 'Diamonds and Gold in South Africa ... With maps, etc',
- 'Missionary Travels and Researches in South Africa ... With notes by F. S. Arnot. With map and illustrations. New edition',
- 'A Narrative of a Visit to the Mauritius and South Africa ... Illustrated by two maps, sixteen etchings and twenty-eight wood-cuts',
- 'Side Lights on South Africa ... With a map, etc',
- 'My Second Journey through Equatorial Africa ... in ... 1886 and 1887 ... Translated ... by M. J. A. Bergmann. With a map ... and ... illustrations, etc',
- 'Missionary Travels and Researches in South Africa ... With portrait and fullpage illustrations',
- '[African sketches.] Narrative of a residence in South Africa ... A new edition. To which is prefixed a biographical sketch of the author by J. Conder',
- 'Lake Ngami; or, Explorations and discoveries during four years wandering in the wilds of South Western Africa ... With a map, and numerous illustrations, etc'
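The exact sampling procedure used to produce the lists above is not documented beyond this description; the following is a sketch of one way to reproduce this kind of search-and-sample query (the fixed seed is an assumption made for reproducibility):

```python
import random

from datasets import load_dataset

ds = load_dataset("TheBritishLibrary/blbooksgenre", "title_genre_classifiction", split="train")

def sample_titles_containing(substring, k=10, seed=0):
    """Return up to k randomly sampled titles containing `substring`."""
    matches = [title for title in ds["title"] if substring in title]
    rng = random.Random(seed)
    return rng.sample(matches, k=min(k, len(matches)))

for title in sample_titles_containing("India"):
    print(title)
```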
Whilst this dataset doesn't include the underlying text, it is important to consider the potential attitudes represented in the titles of the books, or in the full text if you are using this dataset in conjunction with the full text.

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The books are licensed under the [CC Public Domain Mark 1.0](https://creativecommons.org/publicdomain/mark/1.0/) license.

### Citation Information

```bibtex
@misc{british_library_genre,
  title={19th Century Books - metadata with additional crowdsourced annotations},
  url={https://doi.org/10.23636/BKHQ-0312},
  author={{British Library} and Morris, Victoria and van Strien, Daniel and Tolfo, Giorgia and Afric, Lora and Robertson, Stewart and Tiney, Patricia and Dogterom, Annelies and Wollner, Ildi},
  year={2021}}
```

### Contributions

Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
TheBritishLibrary/blbooksgenre
[ "task_categories:text-classification", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "language_creators:crowdsourced", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:de", "language:en", "language:fr", "language:nl", "license:cc0-1.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["de", "en", "fr", "nl"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification", "text-generation", "fill-mask"], "task_ids": ["topic-classification", "multi-label-classification", "language-modeling", "masked-language-modeling"], "pretty_name": "British Library Books Genre", "config_names": ["annotated_raw", "raw", "title_genre_classifiction"], "dataset_info": [{"config_name": "title_genre_classifiction", "features": [{"name": "BL record ID", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Fiction", "1": "Non-fiction"}}}}], "splits": [{"name": "train", "num_bytes": 187600, "num_examples": 1736}], "download_size": 20111420, "dataset_size": 187600}, {"config_name": "annotated_raw", "features": [{"name": "BL record ID", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Dates associated with name", "dtype": "string"}, {"name": "Type of name", "dtype": "string"}, {"name": "Role", "dtype": "string"}, {"name": "All names", "sequence": "string"}, {"name": "Title", "dtype": "string"}, {"name": "Variant titles", "dtype": "string"}, {"name": "Series title", "dtype": "string"}, {"name": "Number within series", "dtype": "string"}, {"name": "Country of publication", "sequence": "string"}, {"name": "Place of publication", "sequence": "string"}, {"name": "Publisher", "dtype": "string"}, {"name": "Date of publication", "dtype": "string"}, {"name": "Edition", "dtype": "string"}, {"name": "Physical description", "dtype": "string"}, {"name": "Dewey classification", "dtype": "string"}, {"name": "BL shelfmark", "dtype": "string"}, {"name": "Topics", "dtype": "string"}, {"name": "Genre", "dtype": "string"}, {"name": "Languages", "sequence": "string"}, {"name": "Notes", "dtype": "string"}, {"name": "BL record ID for physical resource", "dtype": "string"}, {"name": "classification_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}, {"name": "subject_ids", "dtype": "string"}, {"name": "annotator_date_pub", "dtype": "string"}, {"name": "annotator_normalised_date_pub", "dtype": "string"}, {"name": "annotator_edition_statement", "dtype": "string"}, {"name": "annotator_FAST_genre_terms", "dtype": "string"}, {"name": "annotator_FAST_subject_terms", "dtype": "string"}, {"name": "annotator_comments", "dtype": "string"}, {"name": "annotator_main_language", "dtype": "string"}, {"name": "annotator_other_languages_summaries", "dtype": "string"}, {"name": "annotator_summaries_language", "dtype": "string"}, {"name": "annotator_translation", "dtype": "string"}, {"name": "annotator_original_language", "dtype": "string"}, {"name": "annotator_publisher", "dtype": "string"}, {"name": "annotator_place_pub", "dtype": "string"}, {"name": "annotator_country", "dtype": "string"}, {"name": "annotator_title", "dtype": "string"}, {"name": "Link to digitised book", "dtype": "string"}, {"name": "annotated", "dtype": "bool"}, {"name": "Type of resource", "dtype": {"class_label": {"names": {"0": "Monograph", "1": "Serial"}}}}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "annotator_genre", "dtype": {"class_label": {"names": {"0": "Fiction", "1": "Can't tell", "2": "Non-fiction", "3": "The book contains both Fiction and Non-Fiction"}}}}], "splits": [{"name": "train", "num_bytes": 3583138, 
"num_examples": 4398}], "download_size": 20111420, "dataset_size": 3583138}, {"config_name": "raw", "features": [{"name": "BL record ID", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Dates associated with name", "dtype": "string"}, {"name": "Type of name", "dtype": "string"}, {"name": "Role", "dtype": "string"}, {"name": "All names", "sequence": "string"}, {"name": "Title", "dtype": "string"}, {"name": "Variant titles", "dtype": "string"}, {"name": "Series title", "dtype": "string"}, {"name": "Number within series", "dtype": "string"}, {"name": "Country of publication", "sequence": "string"}, {"name": "Place of publication", "sequence": "string"}, {"name": "Publisher", "dtype": "string"}, {"name": "Date of publication", "dtype": "string"}, {"name": "Edition", "dtype": "string"}, {"name": "Physical description", "dtype": "string"}, {"name": "Dewey classification", "dtype": "string"}, {"name": "BL shelfmark", "dtype": "string"}, {"name": "Topics", "dtype": "string"}, {"name": "Genre", "dtype": "string"}, {"name": "Languages", "sequence": "string"}, {"name": "Notes", "dtype": "string"}, {"name": "BL record ID for physical resource", "dtype": "string"}, {"name": "classification_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}, {"name": "subject_ids", "dtype": "string"}, {"name": "annotator_date_pub", "dtype": "string"}, {"name": "annotator_normalised_date_pub", "dtype": "string"}, {"name": "annotator_edition_statement", "dtype": "string"}, {"name": "annotator_FAST_genre_terms", "dtype": "string"}, {"name": "annotator_FAST_subject_terms", "dtype": "string"}, {"name": "annotator_comments", "dtype": "string"}, {"name": "annotator_main_language", "dtype": "string"}, {"name": "annotator_other_languages_summaries", "dtype": "string"}, {"name": "annotator_summaries_language", "dtype": "string"}, {"name": "annotator_translation", "dtype": "string"}, {"name": "annotator_original_language", "dtype": "string"}, {"name": "annotator_publisher", "dtype": "string"}, {"name": "annotator_place_pub", "dtype": "string"}, {"name": "annotator_country", "dtype": "string"}, {"name": "annotator_title", "dtype": "string"}, {"name": "Link to digitised book", "dtype": "string"}, {"name": "annotated", "dtype": "bool"}, {"name": "Type of resource", "dtype": {"class_label": {"names": {"0": "Monograph", "1": "Serial", "2": "Monographic component part"}}}}, {"name": "created_at", "dtype": "string"}, {"name": "annotator_genre", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27518816, "num_examples": 55343}], "download_size": 20111420, "dataset_size": 27518816}]}
2023-06-01T13:59:51+00:00
[]
[ "de", "en", "fr", "nl" ]
Dataset Card for blbooksgenre ============================= Table of Contents ----------------- * Dataset Card for blbooksgenre + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards * Supervised tasks + Languages + Dataset Structure - Data Instances - Data Fields - Data Splits + Dataset Creation - Curation Rationale - Source Data * Initial Data Collection and Normalization * Who are the source language producers? - Annotations * Annotation process * Who are the annotators? - Personal and Sensitive Information + Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases * Colonialism - Other Known Limitations + Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions Dataset Description ------------------- * Homepage:: URL * Repository: URL * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary This dataset consists of metadata relating to books digitised by the British Library in partnership with Microsoft. Some of this metadata was exported from the British Library catalogue whilst others was generated as part of a crowdsourcing project. The text of this book and other metadata can be found on the URL website. The majority of the books in this collection were published in the 18th and 19th Century but the collection also includes a smaller number of books from earlier periods. Items within this collection cover a wide range of subject areas including geography, philosophy, history, poetry and literature and are published in a variety of languages. For the subsection of the data which contains additional crowsourced annotations the date of publication breakdown is as follows: ### Supported Tasks and Leaderboards The digitised books collection which this dataset describes has been used in a variety of digital history and humanities projects since being published. This dataset is suitable for a variety of unsupervised tasks and for a 'genre classification task'. #### Supervised tasks The main possible use case for this dataset is to develop and evaluate 'genre classification' models. The dataset includes human generated labels for whether a book is 'fiction' or 'non-fiction'. This has been used to train models for genre classifcation which predict whether a book is 'fiction' or 'non-fiction' based on its title. ### Languages Dataset Structure ----------------- The dataset currently has three configurations intended to support a range of tasks for which this dataset could be used for: * 'title\_genre\_classifiction' : this creates a de-duplicated version of the dataset with the 'BL record', 'title' and 'label'. * 'annotated\_raw': This version of the dataset includes all fields from the original dataset which are annotated. This includes duplication from different annotators" * 'raw': This version of the dataset includes all the data from the original data including data without annotations. ### Data Instances An example data instance from the 'title\_genre\_classifiction' config: An example data instance from the 'annotated\_raw' config: ### Data Fields The data fields differ slightly between configs. All possible fields for the 'annotated\_raw' config are listed below. For the 'raw' version of the dataset datatypes are usually string to avoid errors when processing missing values. * 'BL record ID': an internal ID used by the British Library, this can be useful for linking this data to other BL collections. 
* 'Name': name associated with the item (usually author) * 'Dates associated with name': dates associated with above e.g. DOB * 'Type of name': whether 'Name' is a person or an organization etc. * 'Role': i.e. whether 'Name' is 'author', 'publisher' etc. * 'All names': a fuller list of names associated with the item. * 'Title': The title of the work * 'Variant titles' * 'Series title' * 'Number within series' * 'Country of publication': encoded as a list of countries listed in the metadata * 'Place of publication': encoded as a list of places listed in the metadata * 'Publisher' * 'Date of publication': this is encoded as a string since this field can include data ranges i.e.'1850-1855'. * 'Edition' * 'Physical description': encoded as a string since the format of this field varies * 'Dewey classification' * 'BL shelfmark': a British Library shelf mark * 'Topics': topics included in the catalogue record * 'Genre' the genre information included in the original catalogue record note that this is often missing * 'Languages'; encoded as a list of languages * 'Notes': notes from the catalogue record * 'BL record ID for physical resource' The following fields are all generated via the crowdsourcing task (discussed in more detail below) * 'classification\_id': ID for the classification in the annotation task * 'user\_id' ID for the annotator * 'subject\_ids': internal annotation task ID * 'annotator\_date\_pub': an updated publication data * 'annotator\_normalised\_date\_pub': normalized version of the above * 'annotator\_edition\_statement' updated edition * 'annotator\_FAST\_genre\_terms': FAST classification genre terms * 'annotator\_FAST\_subject\_terms': FAST subject terms * 'annotator\_comments': free form comments * 'annotator\_main\_language' * 'annotator\_other\_languages\_summaries' * ''annotator\_summaries\_language' * 'annotator\_translation' * 'annotator\_original\_language' * 'annotator\_publisher' * 'annotator\_place\_pub' * 'annotator\_country' * 'annotator\_title' * 'Link to digitised book' * 'annotated': 'bool' flag to indicate if row has annotations or not * 'created\_at': when the annotation was created * 'annotator\_genre': the updated annotation for the 'genre' of the book. Finally the 'label' field of the 'title\_genre\_classifiction' configuration is a class label with values 0 (Fiction) or 1 (Non-fiction). ### Data Splits This dataset contains a single split 'train'. Dataset Creation ---------------- Note this section is a work in progress. ### Curation Rationale The books in this collection were digitised as part of a project partnership between the British Library and Microsoft. Mass digitisation i.e. projects where there is a goal to quickly digitise large volumes of materials shape the selection of materials to include in a number of ways. Some consideratoins which are often involved in the decision of whether to include items for digitization include (but are not limited to): * copyright status * preservation needs- the size of an item, very large and very small items are often hard to digitize quickly These criteria can have knock-on effects on the makeup of a collection. For example systematically excluding large books may result in some types of book content not being digitized. Large volumes are likely to be correlated to content to at least some extent so excluding them from digitization will mean that material is under represented. Similarly copyright status is often (but not only) determined by publication data. 
This can often lead to a rapid fall in the number of items in a collection after a certain cut-off date. All of the above is largely to make clear that this collection was not curated with the aim of creating a representative sample of the British Library's holdings. Some material will be over-represented and other under-represented. Similarly, the collection should not be considered a representative sample of what was published across the time period covered by the dataset (nor that that the relative proportions of the data for each time period represent a proportional sample of publications from that period). ### Source Data The original source data (physical items) includes a variety of resources (predominantly monographs) held by the British Library. The British Library is a Legal Deposit library. "Legal deposit requires publishers to provide a copy of every work they publish in the UK to the British Library. It's existed in English law since 1662."source. #### Initial Data Collection and Normalization This version of the dataset was created partially from data exported from British Library catalogue records and partially via data generated from a crowdsourcing task involving British Library staff. #### Who are the source language producers? ### Annotations The data does includes metadata associated with the books these are produced by British Library staff. The additional annotations were carried out during 2020 as part of an internal crowdsourcing task. #### Annotation process New annotations were produced via a crowdsourcing tasks. Annotators have the option to pick titles from a particular language subset from the broader digitized 19th century books collection. As a result the annotations are not random and overrepresent some languages. #### Who are the annotators? Staff working at the British Library. Most of these staff work with metadata as part of their jobs and so could be considered expert annotators. ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- There a range of considerations around using the data. These include the representativeness of the dataset, the bias towards particular languages etc. It is also important to note that library metadata is not static. The metadata held in library catalogues is updated and changed over time for a variety of reasons. The way in which different institutions catalogue items also varies. As a result it is important to evaluate the performance of any models trained on this data before applying to a new collection. ### Social Impact of Dataset ### Discussion of Biases The text in this collection is derived from historic text. As a result the text will reflect to social beliefs and attitudes of this time period. The titles of the book give some sense of their content. Examples of book titles which appear in the data (these are randomly sampled from all titles): * 'Rhymes and Dreams, Legends of Pendle Forest, and other poems', * "Précis of Information concerning the Zulu Country, with a map. Prepared in the Intelligence Branch of the Quarter-Master-General's Department, Horse Guards, War Office, etc", * 'The fan. A poem', * 'Grif; a story of Australian Life', * 'Calypso; a masque: in three acts, etc', * 'Tales Uncle told [With illustrative woodcuts.]', * 'Questings', * 'Home Life on an Ostrich Farm. With ... illustrations', * 'Bulgarya i Bulgarowie', * 'Εἰς τα βαθη της Ἀφρικης [In darkest Africa.] ... Μεταφρασις Γεωρ. Σ. Βουτσινα, etc', * 'The Corsair, a tale', 'Poems ... 
With notes [With a portrait.]', * 'Report of the Librarian for the year 1898 (1899, 1901, 1909)', * "The World of Thought. A novel. By the author of 'Before I began to speak.'", * 'Amleto; tragedia ... recata in versi italiani da M. Leoni, etc'] Whilst using titles alone, is obviously insufficient to integrate bias in this collection it gives some insight into the topics covered by books in the corpus. Further looking into the tiles highlight some particular types of bias we might find in the collection. This should in no way be considered an exhaustive list. #### Colonialism We can see even in the above random sample of titles examples of colonial attitudes. We can try and interrogate this further by searching for the name of countries which were part of the British Empire at the time many of these books were published. Searching for the string 'India' in the titles and randomly sampling 10 titles returns: * "Travels in India in the Seventeenth Century: by Sir Thomas Roe and Dr. John Fryer. Reprinted from the 'Calcutta Weekly Englishman.'", * 'A Winter in India and Malaysia among the Methodist Missions', * "The Tourist's Guide to all the principal stations on the railways of Northern India [By W. W.] ... Fifth edition", * 'Records of Sport and Military Life in Western India ... With an introduction by ... G. B. Malleson', * "Lakhmi, the Rájpút's Bride. A tale of Gujarát in Western India [A poem.]", * 'The West India Commonplace Book: compiled from parliamentary and official documents; shewing the interest of Great Britain in its Sugar Colonies', * "From Tonkin to India : by the sources of the Irawadi, January '95-January '96", * 'Case of the Ameers of Sinde : speeches of Mr. John Sullivan, and Captain William Eastwick, at a special court held at the India House, ... 26th January, 1844', * 'The Andaman Islands; their colonization, etc. A correspondence addressed to the India Office', * 'Ancient India as described by Ptolemy; being a translation of the chapters which describe India and Eastern Asia in the treatise on Geography written by Klaudios Ptolemaios ... with introduction, commentary, map of India according to Ptolemy, and ... index, by J. W. McCrindle'] Searching form the string 'Africa' in the titles and randomly sampling 10 titles returns: * ['De Benguella ás Terras de Iácca. Descripção de uma viagem na Africa Central e Occidental ... Expedição organisada nos annos de 1877-1880. Edição illustrada', * 'To the New Geographical Society of Edinburgh [An address on Africa by H. M. Stanley.]', * 'Diamonds and Gold in South Africa ... With maps, etc', * 'Missionary Travels and Researches in South Africa ... With notes by F. S. Arnot. With map and illustrations. New edition', * 'A Narrative of a Visit to the Mauritius and South Africa ... Illustrated by two maps, sixteen etchings and twenty-eight wood-cuts', * 'Side Lights on South Africa ... With a map, etc', * 'My Second Journey through Equatorial Africa ... in ... 1886 and 1887 ... Translated ... by M. J. A. Bergmann. With a map ... and ... illustrations, etc', * 'Missionary Travels and Researches in South Africa ... With portrait and fullpage illustrations', * '[African sketches.] Narrative of a residence in South Africa ... A new edition. To which is prefixed a biographical sketch of the author by J. Conder', * 'Lake Ngami; or, Explorations and discoveries during four years wandering in the wilds of South Western Africa ... 
With a map, and numerous illustrations, etc'] Whilst this dataset doesn't include the underlying text it is important to consider the potential attitudes represented in the title of the books, or the full text if you are using this dataset in conjunction with the full text. ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The books are licensed under the CC Public Domain Mark 1.0 license. ### Contributions Thanks to @davanstrien for adding this dataset.
[ "### Dataset Summary\n\n\nThis dataset consists of metadata relating to books digitised by the British Library in partnership with Microsoft. Some of this metadata was exported from the British Library catalogue whilst others was generated as part of a crowdsourcing project. The text of this book and other metadata can be found on the URL website.\n\n\nThe majority of the books in this collection were published in the 18th and 19th Century but the collection also includes a smaller number of books from earlier periods. Items within this collection cover a wide range of subject areas including geography, philosophy, history, poetry and literature and are published in a variety of languages.\n\n\nFor the subsection of the data which contains additional crowsourced annotations the date of publication breakdown is as follows:", "### Supported Tasks and Leaderboards\n\n\nThe digitised books collection which this dataset describes has been used in a variety of digital history and humanities projects since being published.\n\n\nThis dataset is suitable for a variety of unsupervised tasks and for a 'genre classification task'.", "#### Supervised tasks\n\n\nThe main possible use case for this dataset is to develop and evaluate 'genre classification' models. The dataset includes human generated labels for whether a book is 'fiction' or 'non-fiction'. This has been used to train models for genre classifcation which predict whether a book is 'fiction' or 'non-fiction' based on its title.", "### Languages\n\n\nDataset Structure\n-----------------\n\n\nThe dataset currently has three configurations intended to support a range of tasks for which this dataset could be used for:\n\n\n* 'title\\_genre\\_classifiction' : this creates a de-duplicated version of the dataset with the 'BL record', 'title' and 'label'.\n* 'annotated\\_raw': This version of the dataset includes all fields from the original dataset which are annotated. This includes duplication from different annotators\"\n* 'raw': This version of the dataset includes all the data from the original data including data without annotations.", "### Data Instances\n\n\nAn example data instance from the 'title\\_genre\\_classifiction' config:\n\n\nAn example data instance from the 'annotated\\_raw' config:", "### Data Fields\n\n\nThe data fields differ slightly between configs. All possible fields for the 'annotated\\_raw' config are listed below. For the 'raw' version of the dataset datatypes are usually string to avoid errors when processing missing values.\n\n\n* 'BL record ID': an internal ID used by the British Library, this can be useful for linking this data to other BL collections.\n* 'Name': name associated with the item (usually author)\n* 'Dates associated with name': dates associated with above e.g. DOB\n* 'Type of name': whether 'Name' is a person or an organization etc.\n* 'Role': i.e. 
whether 'Name' is 'author', 'publisher' etc.\n* 'All names': a fuller list of names associated with the item.\n* 'Title': The title of the work\n* 'Variant titles'\n* 'Series title'\n* 'Number within series'\n* 'Country of publication': encoded as a list of countries listed in the metadata\n* 'Place of publication': encoded as a list of places listed in the metadata\n* 'Publisher'\n* 'Date of publication': this is encoded as a string since this field can include data ranges i.e.'1850-1855'.\n* 'Edition'\n* 'Physical description': encoded as a string since the format of this field varies\n* 'Dewey classification'\n* 'BL shelfmark': a British Library shelf mark\n* 'Topics': topics included in the catalogue record\n* 'Genre' the genre information included in the original catalogue record note that this is often missing\n* 'Languages'; encoded as a list of languages\n* 'Notes': notes from the catalogue record\n* 'BL record ID for physical resource'\n\n\nThe following fields are all generated via the crowdsourcing task (discussed in more detail below)\n\n\n* 'classification\\_id': ID for the classification in the annotation task\n* 'user\\_id' ID for the annotator\n* 'subject\\_ids': internal annotation task ID\n* 'annotator\\_date\\_pub': an updated publication data\n* 'annotator\\_normalised\\_date\\_pub': normalized version of the above\n* 'annotator\\_edition\\_statement' updated edition\n* 'annotator\\_FAST\\_genre\\_terms': FAST classification genre terms\n* 'annotator\\_FAST\\_subject\\_terms': FAST subject terms\n* 'annotator\\_comments': free form comments\n* 'annotator\\_main\\_language'\n* 'annotator\\_other\\_languages\\_summaries'\n* ''annotator\\_summaries\\_language'\n* 'annotator\\_translation'\n* 'annotator\\_original\\_language'\n* 'annotator\\_publisher'\n* 'annotator\\_place\\_pub'\n* 'annotator\\_country'\n* 'annotator\\_title'\n* 'Link to digitised book'\n* 'annotated': 'bool' flag to indicate if row has annotations or not\n* 'created\\_at': when the annotation was created\n* 'annotator\\_genre': the updated annotation for the 'genre' of the book.\n\n\nFinally the 'label' field of the 'title\\_genre\\_classifiction' configuration is a class label with values 0 (Fiction) or 1 (Non-fiction).", "### Data Splits\n\n\nThis dataset contains a single split 'train'.\n\n\nDataset Creation\n----------------\n\n\nNote this section is a work in progress.", "### Curation Rationale\n\n\nThe books in this collection were digitised as part of a project partnership between the British Library and Microsoft. Mass digitisation i.e. projects where there is a goal to quickly digitise large volumes of materials shape the selection of materials to include in a number of ways. Some consideratoins which are often involved in the decision of whether to include items for digitization include (but are not limited to):\n\n\n* copyright status\n* preservation needs- the size of an item, very large and very small items are often hard to digitize quickly\n\n\nThese criteria can have knock-on effects on the makeup of a collection. For example systematically excluding large books may result in some types of book content not being digitized. Large volumes are likely to be correlated to content to at least some extent so excluding them from digitization will mean that material is under represented. Similarly copyright status is often (but not only) determined by publication data. 
This can often lead to a rapid fall in the number of items in a collection after a certain cut-off date.\n\n\nAll of the above is largely to make clear that this collection was not curated with the aim of creating a representative sample of the British Library's holdings. Some material will be over-represented and other under-represented. Similarly, the collection should not be considered a representative sample of what was published across the time period covered by the dataset (nor that that the relative proportions of the data for each time period represent a proportional sample of publications from that period).", "### Source Data\n\n\nThe original source data (physical items) includes a variety of resources (predominantly monographs) held by the British Library. The British Library is a Legal Deposit library. \"Legal deposit requires publishers to provide a copy of every work they publish in the UK to the British Library. It's existed in English law since 1662.\"source.", "#### Initial Data Collection and Normalization\n\n\nThis version of the dataset was created partially from data exported from British Library catalogue records and partially via data generated from a crowdsourcing task involving British Library staff.", "#### Who are the source language producers?", "### Annotations\n\n\nThe data does includes metadata associated with the books these are produced by British Library staff. The additional annotations were carried out during 2020 as part of an internal crowdsourcing task.", "#### Annotation process\n\n\nNew annotations were produced via a crowdsourcing tasks. Annotators have the option to pick titles from a particular language subset from the broader digitized 19th century books collection. As a result the annotations are not random and overrepresent some languages.", "#### Who are the annotators?\n\n\nStaff working at the British Library. Most of these staff work with metadata as part of their jobs and so could be considered expert annotators.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThere a range of considerations around using the data. These include the representativeness of the dataset, the bias towards particular languages etc.\n\n\nIt is also important to note that library metadata is not static. The metadata held in library catalogues is updated and changed over time for a variety of reasons.\n\n\nThe way in which different institutions catalogue items also varies. As a result it is important to evaluate the performance of any models trained on this data before applying to a new collection.", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nThe text in this collection is derived from historic text. As a result the text will reflect to social beliefs and attitudes of this time period. The titles of the book give some sense of their content. Examples of book titles which appear in the data (these are randomly sampled from all titles):\n\n\n* 'Rhymes and Dreams, Legends of Pendle Forest, and other poems',\n* \"Précis of Information concerning the Zulu Country, with a map. Prepared in the Intelligence Branch of the Quarter-Master-General's Department, Horse Guards, War Office, etc\",\n* 'The fan. A poem',\n* 'Grif; a story of Australian Life',\n* 'Calypso; a masque: in three acts, etc',\n* 'Tales Uncle told [With illustrative woodcuts.]',\n* 'Questings',\n* 'Home Life on an Ostrich Farm. With ... 
illustrations',\n* 'Bulgarya i Bulgarowie',\n* 'Εἰς τα βαθη της Ἀφρικης [In darkest Africa.] ... Μεταφρασις Γεωρ. Σ. Βουτσινα, etc',\n* 'The Corsair, a tale',\n'Poems ... With notes [With a portrait.]',\n* 'Report of the Librarian for the year 1898 (1899, 1901, 1909)',\n* \"The World of Thought. A novel. By the author of 'Before I began to speak.'\",\n* 'Amleto; tragedia ... recata in versi italiani da M. Leoni, etc']\n\n\nWhilst using titles alone, is obviously insufficient to integrate bias in this collection it gives some insight into the topics covered by books in the corpus. Further looking into the tiles highlight some particular types of bias we might find in the collection. This should in no way be considered an exhaustive list.", "#### Colonialism\n\n\nWe can see even in the above random sample of titles examples of colonial attitudes. We can try and interrogate this further by searching for the name of countries which were part of the British Empire at the time many of these books were published.\n\n\nSearching for the string 'India' in the titles and randomly sampling 10 titles returns:\n\n\n* \"Travels in India in the Seventeenth Century: by Sir Thomas Roe and Dr. John Fryer. Reprinted from the 'Calcutta Weekly Englishman.'\",\n* 'A Winter in India and Malaysia among the Methodist Missions',\n* \"The Tourist's Guide to all the principal stations on the railways of Northern India [By W. W.] ... Fifth edition\",\n* 'Records of Sport and Military Life in Western India ... With an introduction by ... G. B. Malleson',\n* \"Lakhmi, the Rájpút's Bride. A tale of Gujarát in Western India [A poem.]\",\n* 'The West India Commonplace Book: compiled from parliamentary and official documents; shewing the interest of Great Britain in its Sugar Colonies',\n* \"From Tonkin to India : by the sources of the Irawadi, January '95-January '96\",\n* 'Case of the Ameers of Sinde : speeches of Mr. John Sullivan, and Captain William Eastwick, at a special court held at the India House, ... 26th January, 1844',\n* 'The Andaman Islands; their colonization, etc. A correspondence addressed to the India Office',\n* 'Ancient India as described by Ptolemy; being a translation of the chapters which describe India and Eastern Asia in the treatise on Geography written by Klaudios Ptolemaios ... with introduction, commentary, map of India according to Ptolemy, and ... index, by J. W. McCrindle']\n\n\nSearching form the string 'Africa' in the titles and randomly sampling 10 titles returns:\n\n\n* ['De Benguella ás Terras de Iácca. Descripção de uma viagem na Africa Central e Occidental ... Expedição organisada nos annos de 1877-1880. Edição illustrada',\n* 'To the New Geographical Society of Edinburgh [An address on Africa by H. M. Stanley.]',\n* 'Diamonds and Gold in South Africa ... With maps, etc',\n* 'Missionary Travels and Researches in South Africa ... With notes by F. S. Arnot. With map and illustrations. New edition',\n* 'A Narrative of a Visit to the Mauritius and South Africa ... Illustrated by two maps, sixteen etchings and twenty-eight wood-cuts',\n* 'Side Lights on South Africa ... With a map, etc',\n* 'My Second Journey through Equatorial Africa ... in ... 1886 and 1887 ... Translated ... by M. J. A. Bergmann. With a map ... and ... illustrations, etc',\n* 'Missionary Travels and Researches in South Africa ... With portrait and fullpage illustrations',\n* '[African sketches.] Narrative of a residence in South Africa ... A new edition. To which is prefixed a biographical sketch of the author by J. 
Conder',\n* 'Lake Ngami; or, Explorations and discoveries during four years wandering in the wilds of South Western Africa ... With a map, and numerous illustrations, etc']\n\n\nWhilst this dataset doesn't include the underlying text it is important to consider the potential attitudes represented in the title of the books, or the full text if you are using this dataset in conjunction with the full text.", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe books are licensed under the CC Public Domain Mark 1.0 license.", "### Contributions\n\n\nThanks to @davanstrien for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_categories-text-generation #task_categories-fill-mask #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-German #language-English #language-French #language-Dutch #license-cc0-1.0 #region-us \n", "### Dataset Summary\n\n\nThis dataset consists of metadata relating to books digitised by the British Library in partnership with Microsoft. Some of this metadata was exported from the British Library catalogue whilst others was generated as part of a crowdsourcing project. The text of this book and other metadata can be found on the URL website.\n\n\nThe majority of the books in this collection were published in the 18th and 19th Century but the collection also includes a smaller number of books from earlier periods. Items within this collection cover a wide range of subject areas including geography, philosophy, history, poetry and literature and are published in a variety of languages.\n\n\nFor the subsection of the data which contains additional crowsourced annotations the date of publication breakdown is as follows:", "### Supported Tasks and Leaderboards\n\n\nThe digitised books collection which this dataset describes has been used in a variety of digital history and humanities projects since being published.\n\n\nThis dataset is suitable for a variety of unsupervised tasks and for a 'genre classification task'.", "#### Supervised tasks\n\n\nThe main possible use case for this dataset is to develop and evaluate 'genre classification' models. The dataset includes human generated labels for whether a book is 'fiction' or 'non-fiction'. This has been used to train models for genre classifcation which predict whether a book is 'fiction' or 'non-fiction' based on its title.", "### Languages\n\n\nDataset Structure\n-----------------\n\n\nThe dataset currently has three configurations intended to support a range of tasks for which this dataset could be used for:\n\n\n* 'title\\_genre\\_classifiction' : this creates a de-duplicated version of the dataset with the 'BL record', 'title' and 'label'.\n* 'annotated\\_raw': This version of the dataset includes all fields from the original dataset which are annotated. This includes duplication from different annotators\"\n* 'raw': This version of the dataset includes all the data from the original data including data without annotations.", "### Data Instances\n\n\nAn example data instance from the 'title\\_genre\\_classifiction' config:\n\n\nAn example data instance from the 'annotated\\_raw' config:", "### Data Fields\n\n\nThe data fields differ slightly between configs. All possible fields for the 'annotated\\_raw' config are listed below. For the 'raw' version of the dataset datatypes are usually string to avoid errors when processing missing values.\n\n\n* 'BL record ID': an internal ID used by the British Library, this can be useful for linking this data to other BL collections.\n* 'Name': name associated with the item (usually author)\n* 'Dates associated with name': dates associated with above e.g. DOB\n* 'Type of name': whether 'Name' is a person or an organization etc.\n* 'Role': i.e. 
whether 'Name' is 'author', 'publisher' etc.\n* 'All names': a fuller list of names associated with the item.\n* 'Title': The title of the work\n* 'Variant titles'\n* 'Series title'\n* 'Number within series'\n* 'Country of publication': encoded as a list of countries listed in the metadata\n* 'Place of publication': encoded as a list of places listed in the metadata\n* 'Publisher'\n* 'Date of publication': this is encoded as a string since this field can include data ranges i.e.'1850-1855'.\n* 'Edition'\n* 'Physical description': encoded as a string since the format of this field varies\n* 'Dewey classification'\n* 'BL shelfmark': a British Library shelf mark\n* 'Topics': topics included in the catalogue record\n* 'Genre' the genre information included in the original catalogue record note that this is often missing\n* 'Languages'; encoded as a list of languages\n* 'Notes': notes from the catalogue record\n* 'BL record ID for physical resource'\n\n\nThe following fields are all generated via the crowdsourcing task (discussed in more detail below)\n\n\n* 'classification\\_id': ID for the classification in the annotation task\n* 'user\\_id' ID for the annotator\n* 'subject\\_ids': internal annotation task ID\n* 'annotator\\_date\\_pub': an updated publication data\n* 'annotator\\_normalised\\_date\\_pub': normalized version of the above\n* 'annotator\\_edition\\_statement' updated edition\n* 'annotator\\_FAST\\_genre\\_terms': FAST classification genre terms\n* 'annotator\\_FAST\\_subject\\_terms': FAST subject terms\n* 'annotator\\_comments': free form comments\n* 'annotator\\_main\\_language'\n* 'annotator\\_other\\_languages\\_summaries'\n* ''annotator\\_summaries\\_language'\n* 'annotator\\_translation'\n* 'annotator\\_original\\_language'\n* 'annotator\\_publisher'\n* 'annotator\\_place\\_pub'\n* 'annotator\\_country'\n* 'annotator\\_title'\n* 'Link to digitised book'\n* 'annotated': 'bool' flag to indicate if row has annotations or not\n* 'created\\_at': when the annotation was created\n* 'annotator\\_genre': the updated annotation for the 'genre' of the book.\n\n\nFinally the 'label' field of the 'title\\_genre\\_classifiction' configuration is a class label with values 0 (Fiction) or 1 (Non-fiction).", "### Data Splits\n\n\nThis dataset contains a single split 'train'.\n\n\nDataset Creation\n----------------\n\n\nNote this section is a work in progress.", "### Curation Rationale\n\n\nThe books in this collection were digitised as part of a project partnership between the British Library and Microsoft. Mass digitisation i.e. projects where there is a goal to quickly digitise large volumes of materials shape the selection of materials to include in a number of ways. Some consideratoins which are often involved in the decision of whether to include items for digitization include (but are not limited to):\n\n\n* copyright status\n* preservation needs- the size of an item, very large and very small items are often hard to digitize quickly\n\n\nThese criteria can have knock-on effects on the makeup of a collection. For example systematically excluding large books may result in some types of book content not being digitized. Large volumes are likely to be correlated to content to at least some extent so excluding them from digitization will mean that material is under represented. Similarly copyright status is often (but not only) determined by publication data. 
This can often lead to a rapid fall in the number of items in a collection after a certain cut-off date.\n\n\nAll of the above is largely to make clear that this collection was not curated with the aim of creating a representative sample of the British Library's holdings. Some material will be over-represented and other material under-represented. Similarly, the collection should not be considered a representative sample of what was published across the time period covered by the dataset (nor that the relative proportions of the data for each time period represent a proportional sample of publications from that period).", "### Source Data\n\n\nThe original source data (physical items) includes a variety of resources (predominantly monographs) held by the British Library. The British Library is a Legal Deposit library. \"Legal deposit requires publishers to provide a copy of every work they publish in the UK to the British Library. It's existed in English law since 1662.\" (source).", "#### Initial Data Collection and Normalization\n\n\nThis version of the dataset was created partially from data exported from British Library catalogue records and partially via data generated from a crowdsourcing task involving British Library staff.", "#### Who are the source language producers?", "### Annotations\n\n\nThe data does include metadata associated with the books; these are produced by British Library staff. The additional annotations were carried out during 2020 as part of an internal crowdsourcing task.", "#### Annotation process\n\n\nNew annotations were produced via a crowdsourcing task. Annotators have the option to pick titles from a particular language subset from the broader digitized 19th century books collection. As a result, the annotations are not random and overrepresent some languages.", "#### Who are the annotators?\n\n\nStaff working at the British Library. Most of these staff work with metadata as part of their jobs and so could be considered expert annotators.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThere are a range of considerations around using the data. These include the representativeness of the dataset, the bias towards particular languages, etc.\n\n\nIt is also important to note that library metadata is not static. The metadata held in library catalogues is updated and changed over time for a variety of reasons.\n\n\nThe way in which different institutions catalogue items also varies. As a result, it is important to evaluate the performance of any models trained on this data before applying them to a new collection.", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nThe text in this collection is derived from historic text. As a result, the text will reflect the social beliefs and attitudes of its time period. The titles of the books give some sense of their content. Examples of book titles which appear in the data (these are randomly sampled from all titles):\n\n\n* 'Rhymes and Dreams, Legends of Pendle Forest, and other poems',\n* \"Précis of Information concerning the Zulu Country, with a map. Prepared in the Intelligence Branch of the Quarter-Master-General's Department, Horse Guards, War Office, etc\",\n* 'The fan. A poem',\n* 'Grif; a story of Australian Life',\n* 'Calypso; a masque: in three acts, etc',\n* 'Tales Uncle told [With illustrative woodcuts.]',\n* 'Questings',\n* 'Home Life on an Ostrich Farm. With ... 
illustrations',\n* 'Bulgarya i Bulgarowie',\n* 'Εἰς τα βαθη της Ἀφρικης [In darkest Africa.] ... Μεταφρασις Γεωρ. Σ. Βουτσινα, etc',\n* 'The Corsair, a tale',\n* 'Poems ... With notes [With a portrait.]',\n* 'Report of the Librarian for the year 1898 (1899, 1901, 1909)',\n* \"The World of Thought. A novel. By the author of 'Before I began to speak.'\",\n* 'Amleto; tragedia ... recata in versi italiani da M. Leoni, etc']\n\n\nWhilst using titles alone is obviously insufficient to fully interrogate bias in this collection, it gives some insight into the topics covered by books in the corpus. Looking further into the titles highlights some particular types of bias we might find in the collection. This should in no way be considered an exhaustive list.", "#### Colonialism\n\n\nEven in the above random sample of titles we can see examples of colonial attitudes. We can try and interrogate this further by searching for the names of countries which were part of the British Empire at the time many of these books were published.\n\n\nSearching for the string 'India' in the titles and randomly sampling 10 titles returns:\n\n\n* \"Travels in India in the Seventeenth Century: by Sir Thomas Roe and Dr. John Fryer. Reprinted from the 'Calcutta Weekly Englishman.'\",\n* 'A Winter in India and Malaysia among the Methodist Missions',\n* \"The Tourist's Guide to all the principal stations on the railways of Northern India [By W. W.] ... Fifth edition\",\n* 'Records of Sport and Military Life in Western India ... With an introduction by ... G. B. Malleson',\n* \"Lakhmi, the Rájpút's Bride. A tale of Gujarát in Western India [A poem.]\",\n* 'The West India Commonplace Book: compiled from parliamentary and official documents; shewing the interest of Great Britain in its Sugar Colonies',\n* \"From Tonkin to India : by the sources of the Irawadi, January '95-January '96\",\n* 'Case of the Ameers of Sinde : speeches of Mr. John Sullivan, and Captain William Eastwick, at a special court held at the India House, ... 26th January, 1844',\n* 'The Andaman Islands; their colonization, etc. A correspondence addressed to the India Office',\n* 'Ancient India as described by Ptolemy; being a translation of the chapters which describe India and Eastern Asia in the treatise on Geography written by Klaudios Ptolemaios ... with introduction, commentary, map of India according to Ptolemy, and ... index, by J. W. McCrindle']\n\n\nSearching for the string 'Africa' in the titles and randomly sampling 10 titles returns:\n\n\n* ['De Benguella ás Terras de Iácca. Descripção de uma viagem na Africa Central e Occidental ... Expedição organisada nos annos de 1877-1880. Edição illustrada',\n* 'To the New Geographical Society of Edinburgh [An address on Africa by H. M. Stanley.]',\n* 'Diamonds and Gold in South Africa ... With maps, etc',\n* 'Missionary Travels and Researches in South Africa ... With notes by F. S. Arnot. With map and illustrations. New edition',\n* 'A Narrative of a Visit to the Mauritius and South Africa ... Illustrated by two maps, sixteen etchings and twenty-eight wood-cuts',\n* 'Side Lights on South Africa ... With a map, etc',\n* 'My Second Journey through Equatorial Africa ... in ... 1886 and 1887 ... Translated ... by M. J. A. Bergmann. With a map ... and ... illustrations, etc',\n* 'Missionary Travels and Researches in South Africa ... With portrait and fullpage illustrations',\n* '[African sketches.] Narrative of a residence in South Africa ... A new edition. To which is prefixed a biographical sketch of the author by J. 
Conder',\n* 'Lake Ngami; or, Explorations and discoveries during four years wandering in the wilds of South Western Africa ... With a map, and numerous illustrations, etc']\n\n\nWhilst this dataset doesn't include the underlying text, it is important to consider the potential attitudes represented in the titles of the books, or the full text if you are using this dataset in conjunction with the full text.", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe books are licensed under the CC Public Domain Mark 1.0 license.", "### Contributions\n\n\nThanks to @davanstrien for adding this dataset." ]
[ 187, 171, 64, 90, 147, 44, 822, 33, 332, 82, 48, 10, 42, 64, 39, 126, 7, 438, 849, 14, 6, 20, 18 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-text-generation #task_categories-fill-mask #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-German #language-English #language-French #language-Dutch #license-cc0-1.0 #region-us \n### Dataset Summary\n\n\nThis dataset consists of metadata relating to books digitised by the British Library in partnership with Microsoft. Some of this metadata was exported from the British Library catalogue whilst others was generated as part of a crowdsourcing project. The text of this book and other metadata can be found on the URL website.\n\n\nThe majority of the books in this collection were published in the 18th and 19th Century but the collection also includes a smaller number of books from earlier periods. Items within this collection cover a wide range of subject areas including geography, philosophy, history, poetry and literature and are published in a variety of languages.\n\n\nFor the subsection of the data which contains additional crowsourced annotations the date of publication breakdown is as follows:### Supported Tasks and Leaderboards\n\n\nThe digitised books collection which this dataset describes has been used in a variety of digital history and humanities projects since being published.\n\n\nThis dataset is suitable for a variety of unsupervised tasks and for a 'genre classification task'.", "passage: #### Supervised tasks\n\n\nThe main possible use case for this dataset is to develop and evaluate 'genre classification' models. The dataset includes human generated labels for whether a book is 'fiction' or 'non-fiction'. This has been used to train models for genre classifcation which predict whether a book is 'fiction' or 'non-fiction' based on its title.### Languages\n\n\nDataset Structure\n-----------------\n\n\nThe dataset currently has three configurations intended to support a range of tasks for which this dataset could be used for:\n\n\n* 'title\\_genre\\_classifiction' : this creates a de-duplicated version of the dataset with the 'BL record', 'title' and 'label'.\n* 'annotated\\_raw': This version of the dataset includes all fields from the original dataset which are annotated. This includes duplication from different annotators\"\n* 'raw': This version of the dataset includes all the data from the original data including data without annotations.### Data Instances\n\n\nAn example data instance from the 'title\\_genre\\_classifiction' config:\n\n\nAn example data instance from the 'annotated\\_raw' config:", "passage: ### Data Fields\n\n\nThe data fields differ slightly between configs. All possible fields for the 'annotated\\_raw' config are listed below. For the 'raw' version of the dataset datatypes are usually string to avoid errors when processing missing values.\n\n\n* 'BL record ID': an internal ID used by the British Library, this can be useful for linking this data to other BL collections.\n* 'Name': name associated with the item (usually author)\n* 'Dates associated with name': dates associated with above e.g. DOB\n* 'Type of name': whether 'Name' is a person or an organization etc.\n* 'Role': i.e. 
whether 'Name' is 'author', 'publisher' etc.\n* 'All names': a fuller list of names associated with the item.\n* 'Title': The title of the work\n* 'Variant titles'\n* 'Series title'\n* 'Number within series'\n* 'Country of publication': encoded as a list of countries listed in the metadata\n* 'Place of publication': encoded as a list of places listed in the metadata\n* 'Publisher'\n* 'Date of publication': this is encoded as a string since this field can include date ranges, i.e. '1850-1855'.\n* 'Edition'\n* 'Physical description': encoded as a string since the format of this field varies\n* 'Dewey classification'\n* 'BL shelfmark': a British Library shelf mark\n* 'Topics': topics included in the catalogue record\n* 'Genre': the genre information included in the original catalogue record; note that this is often missing\n* 'Languages': encoded as a list of languages\n* 'Notes': notes from the catalogue record\n* 'BL record ID for physical resource'\n\n\nThe following fields are all generated via the crowdsourcing task (discussed in more detail below):\n\n\n* 'classification\\_id': ID for the classification in the annotation task\n* 'user\\_id': ID for the annotator\n* 'subject\\_ids': internal annotation task ID\n* 'annotator\\_date\\_pub': an updated publication date\n* 'annotator\\_normalised\\_date\\_pub': normalized version of the above\n* 'annotator\\_edition\\_statement': updated edition\n* 'annotator\\_FAST\\_genre\\_terms': FAST classification genre terms\n* 'annotator\\_FAST\\_subject\\_terms': FAST subject terms\n* 'annotator\\_comments': free-form comments\n* 'annotator\\_main\\_language'\n* 'annotator\\_other\\_languages\\_summaries'\n* 'annotator\\_summaries\\_language'\n* 'annotator\\_translation'\n* 'annotator\\_original\\_language'\n* 'annotator\\_publisher'\n* 'annotator\\_place\\_pub'\n* 'annotator\\_country'\n* 'annotator\\_title'\n* 'Link to digitised book'\n* 'annotated': 'bool' flag to indicate if a row has annotations or not\n* 'created\\_at': when the annotation was created\n* 'annotator\\_genre': the updated annotation for the 'genre' of the book.\n\n\nFinally, the 'label' field of the 'title\\_genre\\_classifiction' configuration is a class label with values 0 (Fiction) or 1 (Non-fiction).### Data Splits\n\n\nThis dataset contains a single split, 'train'.\n\n\nDataset Creation\n----------------\n\n\nNote this section is a work in progress.### Curation Rationale\n\n\nThe books in this collection were digitised as part of a project partnership between the British Library and Microsoft. Mass digitisation, i.e. projects where there is a goal to quickly digitise large volumes of materials, shapes the selection of materials to include in a number of ways. Some considerations which are often involved in the decision of whether to include items for digitization include (but are not limited to):\n\n\n* copyright status\n* preservation needs - the size of an item; very large and very small items are often hard to digitize quickly\n\n\nThese criteria can have knock-on effects on the makeup of a collection. For example, systematically excluding large books may result in some types of book content not being digitized. Large volumes are likely to be correlated with content to at least some extent, so excluding them from digitization will mean that material is under-represented. Similarly, copyright status is often (but not only) determined by publication date. 
This can often lead to a rapid fall in the number of items in a collection after a certain cut-off date.\n\n\nAll of the above is largely to make clear that this collection was not curated with the aim of creating a representative sample of the British Library's holdings. Some material will be over-represented and other material under-represented. Similarly, the collection should not be considered a representative sample of what was published across the time period covered by the dataset (nor that the relative proportions of the data for each time period represent a proportional sample of publications from that period).### Source Data\n\n\nThe original source data (physical items) includes a variety of resources (predominantly monographs) held by the British Library. The British Library is a Legal Deposit library. \"Legal deposit requires publishers to provide a copy of every work they publish in the UK to the British Library. It's existed in English law since 1662.\" (source).#### Initial Data Collection and Normalization\n\n\nThis version of the dataset was created partially from data exported from British Library catalogue records and partially via data generated from a crowdsourcing task involving British Library staff.#### Who are the source language producers?", "passage: ### Annotations\n\n\nThe data does include metadata associated with the books; these are produced by British Library staff. The additional annotations were carried out during 2020 as part of an internal crowdsourcing task.#### Annotation process\n\n\nNew annotations were produced via a crowdsourcing task. Annotators have the option to pick titles from a particular language subset from the broader digitized 19th century books collection. As a result, the annotations are not random and overrepresent some languages.#### Who are the annotators?\n\n\nStaff working at the British Library. Most of these staff work with metadata as part of their jobs and so could be considered expert annotators.### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThere are a range of considerations around using the data. These include the representativeness of the dataset, the bias towards particular languages, etc.\n\n\nIt is also important to note that library metadata is not static. The metadata held in library catalogues is updated and changed over time for a variety of reasons.\n\n\nThe way in which different institutions catalogue items also varies. As a result, it is important to evaluate the performance of any models trained on this data before applying them to a new collection.### Social Impact of Dataset", "passage: ### Discussion of Biases\n\n\nThe text in this collection is derived from historic text. As a result, the text will reflect the social beliefs and attitudes of its time period. The titles of the books give some sense of their content. Examples of book titles which appear in the data (these are randomly sampled from all titles):\n\n\n* 'Rhymes and Dreams, Legends of Pendle Forest, and other poems',\n* \"Précis of Information concerning the Zulu Country, with a map. Prepared in the Intelligence Branch of the Quarter-Master-General's Department, Horse Guards, War Office, etc\",\n* 'The fan. A poem',\n* 'Grif; a story of Australian Life',\n* 'Calypso; a masque: in three acts, etc',\n* 'Tales Uncle told [With illustrative woodcuts.]',\n* 'Questings',\n* 'Home Life on an Ostrich Farm. With ... 
illustrations',\n* 'Bulgarya i Bulgarowie',\n* 'Εἰς τα βαθη της Ἀφρικης [In darkest Africa.] ... Μεταφρασις Γεωρ. Σ. Βουτσινα, etc',\n* 'The Corsair, a tale',\n* 'Poems ... With notes [With a portrait.]',\n* 'Report of the Librarian for the year 1898 (1899, 1901, 1909)',\n* \"The World of Thought. A novel. By the author of 'Before I began to speak.'\",\n* 'Amleto; tragedia ... recata in versi italiani da M. Leoni, etc']\n\n\nWhilst using titles alone is obviously insufficient to fully interrogate bias in this collection, it gives some insight into the topics covered by books in the corpus. Looking further into the titles highlights some particular types of bias we might find in the collection. This should in no way be considered an exhaustive list." ]
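For orientation, here is a minimal sketch of loading the genre-classification config described in the record above. The record does not state the dataset's Hub id, so the id used below (`blbooksgenre`) is an assumption, as are the exact lowercase field names; the config name `title_genre_classifiction` is spelled as it appears in the card.

```python
# A minimal sketch, not a verified recipe: the Hub id "blbooksgenre" and the
# field names "title"/"label" are assumptions based on the card text above.
from datasets import load_dataset

ds = load_dataset("blbooksgenre", "title_genre_classifiction", split="train")

example = ds[0]
print(example["title"])
# The card describes `label` as a class label: 0 = Fiction, 1 = Non-fiction.
print(example["label"])
```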
d7b0093243439fa5f0cd9663125cc47575ced2ea
# Dataset Card for "blended_skill_talk" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://parl.ai/projects/bst/](https://parl.ai/projects/bst/) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills](https://arxiv.org/abs/2004.08449v1) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 38.11 MB - **Size of the generated dataset:** 15.08 MB - **Total amount of disk used:** 53.17 MB ### Dataset Summary A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 38.11 MB - **Size of the generated dataset:** 15.08 MB - **Total amount of disk used:** 53.17 MB An example of 'train' looks as follows. ``` { 'personas': ['my parents don t really speak english , but i speak italian and english.', 'i have three children.'], 'additional_context': 'Backstreet Boys', 'previous_utterance': ['Oh, I am a BIG fan of the Backstreet Boys! Have you ever seen them performing live?', "No,I listen to their music a lot, mainly the unbreakable which is the Backstreet Boys' sixth studio album. "], 'context': 'wizard_of_wikipedia', 'free_messages': ['you are very knowledgeable, do you prefer nsync or bsb?', "haha kids of this days don't know them, i'm 46 and i still enjoying them, my kids only listen k-pop", "italian?haha that's strange, i only talk english and a little spanish "], 'guided_messages': ["i don't have a preference, they are both great. All 3 of my kids get annoyed when I listen to them though.", 'Sometimes I sing their songs in Italian, that really annoys them lol.', 'My parents barely speak English, so I was taught both. By the way, what is k-pop?'], 'suggestions': {'convai2': ["i don't have a preference , both are pretty . 
do you have any hobbies ?", "do they the backstreet boys ? that's my favorite group .", 'are your kids interested in music ?'], 'empathetic_dialogues': ['I actually just discovered Imagine Dragons. I love them!', "Hahaha that just goes to show ya, age is just a umber!'", 'That would be hard! Do you now Spanish well?'], 'wizard_of_wikipedia': ['NSYNC Also had Lance Bass and Joey Fatone, sometimes called the Fat One.', 'Yes, there are a few K-Pop songs that I have heard good big in the USA. It is the most popular in South Korea and has Western elements of pop.', 'English, beleive it or not.']}, 'guided_chosen_suggestions': ['convai2', '', ''], 'label_candidates': []}
```

### Data Fields
The data fields are the same among all splits.

#### default
- `personas`: a `list` of `string` features.
- `additional_context`: a `string` feature.
- `previous_utterance`: a `list` of `string` features.
- `context`: a `string` feature.
- `free_messages`: a `list` of `string` features.
- `guided_messages`: a `list` of `string` features.
- `suggestions`: a dictionary feature containing:
  - `convai2`: a `string` feature.
  - `empathetic_dialogues`: a `string` feature.
  - `wizard_of_wikipedia`: a `string` feature.
- `guided_chosen_suggestions`: a `list` of `string` features.
- `label_candidates`: a `list` of `lists` of `string` features.

### Data Splits

| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 4819| 1009| 980|

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @misc{smith2020evaluating, title={Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills}, author={Eric Michael Smith and Mary Williamson and Kurt Shuster and Jason Weston and Y-Lan Boureau}, year={2020}, eprint={2004.08449}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
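For a quick orientation to the fields documented above, the sketch below loads the dataset with the `datasets` library and prints a few of them; the field accesses mirror the example instance shown under Data Instances.

```python
# A short sketch of inspecting the fields described above; assumes the
# Hugging Face `datasets` library is installed.
from datasets import load_dataset

bst = load_dataset("blended_skill_talk")
print(bst)  # train/validation/test with 4819/1009/980 examples

ex = bst["train"][0]
print(ex["personas"])                   # persona strings for this conversation
print(ex["free_messages"][0])           # first turn from the unguided speaker
print(ex["guided_messages"][0])         # first turn from the guided speaker
print(ex["suggestions"]["convai2"][0])  # suggested response from the ConvAI2 source
```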
blended_skill_talk
[ "task_categories:conversational", "task_ids:dialogue-generation", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:unknown", "arxiv:2004.08449", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["conversational"], "task_ids": ["dialogue-generation"], "paperswithcode_id": "blended-skill-talk", "pretty_name": "BlendedSkillTalk", "dataset_info": {"features": [{"name": "personas", "sequence": "string"}, {"name": "additional_context", "dtype": "string"}, {"name": "previous_utterance", "sequence": "string"}, {"name": "context", "dtype": "string"}, {"name": "free_messages", "sequence": "string"}, {"name": "guided_messages", "sequence": "string"}, {"name": "suggestions", "sequence": [{"name": "convai2", "dtype": "string"}, {"name": "empathetic_dialogues", "dtype": "string"}, {"name": "wizard_of_wikipedia", "dtype": "string"}]}, {"name": "guided_chosen_suggestions", "sequence": "string"}, {"name": "label_candidates", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 10830670, "num_examples": 4819}, {"name": "validation", "num_bytes": 43961447, "num_examples": 1009}, {"name": "test", "num_bytes": 44449895, "num_examples": 980}], "download_size": 10897644, "dataset_size": 99242012}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-10T10:22:26+00:00
[ "2004.08449" ]
[ "en" ]
TAGS #task_categories-conversational #task_ids-dialogue-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #arxiv-2004.08449 #region-us
Dataset Card for "blended\_skill\_talk" ======================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills * Point of Contact: * Size of downloaded dataset files: 38.11 MB * Size of the generated dataset: 15.08 MB * Total amount of disk used: 53.17 MB ### Dataset Summary A dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 38.11 MB * Size of the generated dataset: 15.08 MB * Total amount of disk used: 53.17 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'personas': a 'list' of 'string' features. * 'additional\_context': a 'string' feature. * 'previous\_utterance': a 'list' of 'string' features. * 'context': a 'string' feature. * 'free\_messages': a 'list' of 'string' features. * 'guided\_messgaes': a 'list' of 'string' features. * 'suggestions': a dictionary feature containing: + 'convai2': a 'string' feature. + 'empathetic\_dialogues': a 'string' feature. + 'wizard\_of\_wikipedia': a 'string' feature. * 'guided\_chosen\_suggestions': a 'list' of 'string' features. * 'label\_candidates': a 'list' of 'lists' of 'string' features. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @lewtun, @thomwolf, @lhoestq, @patrickvonplaten, @mariamabarham for adding this dataset.
[ "### Dataset Summary\n\n\nA dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 38.11 MB\n* Size of the generated dataset: 15.08 MB\n* Total amount of disk used: 53.17 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'personas': a 'list' of 'string' features.\n* 'additional\\_context': a 'string' feature.\n* 'previous\\_utterance': a 'list' of 'string' features.\n* 'context': a 'string' feature.\n* 'free\\_messages': a 'list' of 'string' features.\n* 'guided\\_messgaes': a 'list' of 'string' features.\n* 'suggestions': a dictionary feature containing:\n\t+ 'convai2': a 'string' feature.\n\t+ 'empathetic\\_dialogues': a 'string' feature.\n\t+ 'wizard\\_of\\_wikipedia': a 'string' feature.\n* 'guided\\_chosen\\_suggestions': a 'list' of 'string' features.\n* 'label\\_candidates': a 'list' of 'lists' of 'string' features.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @lewtun, @thomwolf, @lhoestq, @patrickvonplaten, @mariamabarham for adding this dataset." ]
[ "TAGS\n#task_categories-conversational #task_ids-dialogue-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #arxiv-2004.08449 #region-us \n", "### Dataset Summary\n\n\nA dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 38.11 MB\n* Size of the generated dataset: 15.08 MB\n* Total amount of disk used: 53.17 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'personas': a 'list' of 'string' features.\n* 'additional\\_context': a 'string' feature.\n* 'previous\\_utterance': a 'list' of 'string' features.\n* 'context': a 'string' feature.\n* 'free\\_messages': a 'list' of 'string' features.\n* 'guided\\_messgaes': a 'list' of 'string' features.\n* 'suggestions': a dictionary feature containing:\n\t+ 'convai2': a 'string' feature.\n\t+ 'empathetic\\_dialogues': a 'string' feature.\n\t+ 'wizard\\_of\\_wikipedia': a 'string' feature.\n* 'guided\\_chosen\\_suggestions': a 'list' of 'string' features.\n* 'label\\_candidates': a 'list' of 'lists' of 'string' features.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @lewtun, @thomwolf, @lhoestq, @patrickvonplaten, @mariamabarham for adding this dataset." ]
[ 100, 39, 10, 11, 6, 51, 17, 225, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 39 ]
[ "passage: TAGS\n#task_categories-conversational #task_ids-dialogue-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #arxiv-2004.08449 #region-us \n### Dataset Summary\n\n\nA dataset of 7k conversations explicitly designed to exhibit multiple conversation modes: displaying personality, having empathy, and demonstrating knowledge.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 38.11 MB\n* Size of the generated dataset: 15.08 MB\n* Total amount of disk used: 53.17 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'personas': a 'list' of 'string' features.\n* 'additional\\_context': a 'string' feature.\n* 'previous\\_utterance': a 'list' of 'string' features.\n* 'context': a 'string' feature.\n* 'free\\_messages': a 'list' of 'string' features.\n* 'guided\\_messgaes': a 'list' of 'string' features.\n* 'suggestions': a dictionary feature containing:\n\t+ 'convai2': a 'string' feature.\n\t+ 'empathetic\\_dialogues': a 'string' feature.\n\t+ 'wizard\\_of\\_wikipedia': a 'string' feature.\n* 'guided\\_chosen\\_suggestions': a 'list' of 'string' features.\n* 'label\\_candidates': a 'list' of 'lists' of 'string' features.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations" ]
877fba0801ffb7cbd8c39c1ff314a46f053f6036
# Dataset Card for "blimp" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/alexwarstadt/blimp - **Paper:** [BLiMP: The Benchmark of Linguistic Minimal Pairs for English](https://doi.org/10.1162/tacl_a_00321) - **Paper:** https://arxiv.org/abs/1912.00582 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 29.58 MB - **Size of the generated dataset:** 11.45 MB - **Total amount of disk used:** 41.03 MB ### Dataset Summary BLiMP is a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics. The data is automatically generated according to expert-crafted grammars. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### adjunct_island - **Size of downloaded dataset files:** 0.36 MB - **Size of the generated dataset:** 0.17 MB - **Total amount of disk used:** 0.52 MB An example of 'train' looks as follows. ``` { "UID": "tough_vs_raising_1", "field": "syntax_semantics", "lexically_identical": false, "linguistics_term": "control_raising", "one_prefix_method": false, "pair_id": 2, "sentence_bad": "Benjamin's tutor was certain to boast about.", "sentence_good": "Benjamin's tutor was easy to boast about.", "simple_LM_method": true, "two_prefix_method": false } ``` #### anaphor_gender_agreement - **Size of downloaded dataset files:** 0.44 MB - **Size of the generated dataset:** 0.14 MB - **Total amount of disk used:** 0.57 MB An example of 'train' looks as follows. 
``` { "UID": "tough_vs_raising_1", "field": "syntax_semantics", "lexically_identical": false, "linguistics_term": "control_raising", "one_prefix_method": false, "pair_id": 2, "sentence_bad": "Benjamin's tutor was certain to boast about.", "sentence_good": "Benjamin's tutor was easy to boast about.", "simple_LM_method": true, "two_prefix_method": false } ``` #### anaphor_number_agreement - **Size of downloaded dataset files:** 0.45 MB - **Size of the generated dataset:** 0.14 MB - **Total amount of disk used:** 0.59 MB An example of 'train' looks as follows. ``` { "UID": "tough_vs_raising_1", "field": "syntax_semantics", "lexically_identical": false, "linguistics_term": "control_raising", "one_prefix_method": false, "pair_id": 2, "sentence_bad": "Benjamin's tutor was certain to boast about.", "sentence_good": "Benjamin's tutor was easy to boast about.", "simple_LM_method": true, "two_prefix_method": false } ``` #### animate_subject_passive - **Size of downloaded dataset files:** 0.46 MB - **Size of the generated dataset:** 0.15 MB - **Total amount of disk used:** 0.61 MB An example of 'train' looks as follows. ``` { "UID": "tough_vs_raising_1", "field": "syntax_semantics", "lexically_identical": false, "linguistics_term": "control_raising", "one_prefix_method": false, "pair_id": 2, "sentence_bad": "Benjamin's tutor was certain to boast about.", "sentence_good": "Benjamin's tutor was easy to boast about.", "simple_LM_method": true, "two_prefix_method": false } ``` #### animate_subject_trans - **Size of downloaded dataset files:** 0.43 MB - **Size of the generated dataset:** 0.13 MB - **Total amount of disk used:** 0.57 MB An example of 'train' looks as follows. ``` { "UID": "tough_vs_raising_1", "field": "syntax_semantics", "lexically_identical": false, "linguistics_term": "control_raising", "one_prefix_method": false, "pair_id": 2, "sentence_bad": "Benjamin's tutor was certain to boast about.", "sentence_good": "Benjamin's tutor was easy to boast about.", "simple_LM_method": true, "two_prefix_method": false } ``` ### Data Fields The data fields are the same among all splits. #### adjunct_island - `sentence_good`: a `string` feature. - `sentence_bad`: a `string` feature. - `field`: a `string` feature. - `linguistics_term`: a `string` feature. - `UID`: a `string` feature. - `simple_LM_method`: a `bool` feature. - `one_prefix_method`: a `bool` feature. - `two_prefix_method`: a `bool` feature. - `lexically_identical`: a `bool` feature. - `pair_id`: a `int32` feature. #### anaphor_gender_agreement - `sentence_good`: a `string` feature. - `sentence_bad`: a `string` feature. - `field`: a `string` feature. - `linguistics_term`: a `string` feature. - `UID`: a `string` feature. - `simple_LM_method`: a `bool` feature. - `one_prefix_method`: a `bool` feature. - `two_prefix_method`: a `bool` feature. - `lexically_identical`: a `bool` feature. - `pair_id`: a `int32` feature. #### anaphor_number_agreement - `sentence_good`: a `string` feature. - `sentence_bad`: a `string` feature. - `field`: a `string` feature. - `linguistics_term`: a `string` feature. - `UID`: a `string` feature. - `simple_LM_method`: a `bool` feature. - `one_prefix_method`: a `bool` feature. - `two_prefix_method`: a `bool` feature. - `lexically_identical`: a `bool` feature. - `pair_id`: a `int32` feature. #### animate_subject_passive - `sentence_good`: a `string` feature. - `sentence_bad`: a `string` feature. - `field`: a `string` feature. - `linguistics_term`: a `string` feature. - `UID`: a `string` feature. 
- `simple_LM_method`: a `bool` feature. - `one_prefix_method`: a `bool` feature. - `two_prefix_method`: a `bool` feature. - `lexically_identical`: a `bool` feature. - `pair_id`: a `int32` feature. #### animate_subject_trans - `sentence_good`: a `string` feature. - `sentence_bad`: a `string` feature. - `field`: a `string` feature. - `linguistics_term`: a `string` feature. - `UID`: a `string` feature. - `simple_LM_method`: a `bool` feature. - `one_prefix_method`: a `bool` feature. - `two_prefix_method`: a `bool` feature. - `lexically_identical`: a `bool` feature. - `pair_id`: a `int32` feature. ### Data Splits | name |train| |------------------------|----:| |adjunct_island | 1000| |anaphor_gender_agreement| 1000| |anaphor_number_agreement| 1000| |animate_subject_passive | 1000| |animate_subject_trans | 1000| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information BLiMP is distributed under a [CC-BY](https://creativecommons.org/licenses/by/4.0/) license. Source: https://github.com/alexwarstadt/blimp#license ### Citation Information ``` @article{warstadt2020blimp, author = {Warstadt, Alex and Parrish, Alicia and Liu, Haokun and Mohananey, Anhad and Peng, Wei and Wang, Sheng-Fu and Bowman, Samuel R.}, title = {BLiMP: The Benchmark of Linguistic Minimal Pairs for English}, journal = {Transactions of the Association for Computational Linguistics}, volume = {8}, number = {}, pages = {377-392}, year = {2020}, doi = {10.1162/tacl\_a\_00321}, URL = {https://doi.org/10.1162/tacl_a_00321}, eprint = {https://doi.org/10.1162/tacl_a_00321}, abstract = { We introduce The Benchmark of Linguistic Minimal Pairs (BLiMP),1 a challenge set for evaluating the linguistic knowledge of language models (LMs) on major grammatical phenomena in English. 
BLiMP consists of 67 individual datasets, each containing 1,000 minimal pairs—that is, pairs of minimally different sentences that contrast in grammatical acceptability and isolate specific phenomenon in syntax, morphology, or semantics. We generate the data according to linguist-crafted grammar templates, and human aggregate agreement with the labels is 96.4\%. We evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs by observing whether they assign a higher probability to the acceptable sentence in each minimal pair. We find that state-of-the-art models identify morphological contrasts related to agreement reliably, but they struggle with some subtle semantic and syntactic phenomena, such as negative polarity items and extraction islands. } } ``` #### Errata Some results were misreported in the published TACL version. Please refer to the corrected version on arXiv: https://arxiv.org/abs/1912.00582 ### Contributions Thanks to [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
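The `simple_LM_method` evaluation described above reduces to checking whether a language model assigns higher probability to `sentence_good` than to `sentence_bad` in each pair. The following is a minimal sketch of that procedure using GPT-2 through `transformers`; the model choice is illustrative and this is not a reproduction of the paper's exact setup.

```python
# A minimal sketch of the "simple LM method": a model gets a pair right when it
# assigns higher total log-probability to sentence_good than to sentence_bad.
# GPT-2 is used here purely as an example model.
import torch
from datasets import load_dataset
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def total_log_prob(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids the model returns the mean cross-entropy over the
        # n-1 predicted tokens; rescale to recover the total log-probability.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

pairs = load_dataset("nyu-mll/blimp", "adjunct_island", split="train")
correct = sum(
    total_log_prob(ex["sentence_good"]) > total_log_prob(ex["sentence_bad"])
    for ex in pairs
)
print(f"adjunct_island accuracy: {correct / len(pairs):.3f}")
```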
nyu-mll/blimp
[ "task_categories:text-classification", "task_ids:acceptability-classification", "annotations_creators:crowdsourced", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:1912.00582", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["acceptability-classification"], "paperswithcode_id": "blimp", "pretty_name": "BLiMP", "dataset_info": [{"config_name": "adjunct_island", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 165894, "num_examples": 1000}], "download_size": 62231, "dataset_size": 165894}, {"config_name": "anaphor_gender_agreement", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 130918, "num_examples": 1000}], "download_size": 39201, "dataset_size": 130918}, {"config_name": "anaphor_number_agreement", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 139879, "num_examples": 1000}], "download_size": 41547, "dataset_size": 139879}, {"config_name": "animate_subject_passive", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 144423, "num_examples": 1000}], "download_size": 47282, "dataset_size": 144423}, {"config_name": "animate_subject_trans", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 127798, "num_examples": 1000}], "download_size": 49651, "dataset_size": 127798}, {"config_name": "causative", 
"features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 122772, "num_examples": 1000}], "download_size": 48963, "dataset_size": 122772}, {"config_name": "complex_NP_island", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 198972, "num_examples": 1000}], "download_size": 78211, "dataset_size": 198972}, {"config_name": "coordinate_structure_constraint_complex_left_branch", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 210912, "num_examples": 1000}], "download_size": 67908, "dataset_size": 210912}, {"config_name": "coordinate_structure_constraint_object_extraction", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 171655, "num_examples": 1000}], "download_size": 51584, "dataset_size": 171655}, {"config_name": "determiner_noun_agreement_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 156120, "num_examples": 1000}], "download_size": 49893, "dataset_size": 156120}, {"config_name": "determiner_noun_agreement_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": 
"bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 156204, "num_examples": 1000}], "download_size": 49527, "dataset_size": 156204}, {"config_name": "determiner_noun_agreement_irregular_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 164473, "num_examples": 1000}], "download_size": 47274, "dataset_size": 164473}, {"config_name": "determiner_noun_agreement_irregular_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 161074, "num_examples": 1000}], "download_size": 47422, "dataset_size": 161074}, {"config_name": "determiner_noun_agreement_with_adj_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 179666, "num_examples": 1000}], "download_size": 56346, "dataset_size": 179666}, {"config_name": "determiner_noun_agreement_with_adj_irregular_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 184529, "num_examples": 1000}], "download_size": 54405, "dataset_size": 184529}, {"config_name": "determiner_noun_agreement_with_adj_irregular_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 184396, "num_examples": 1000}], "download_size": 54064, "dataset_size": 184396}, {"config_name": "determiner_noun_agreement_with_adjective_1", "features": [{"name": "sentence_good", "dtype": 
"string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 185126, "num_examples": 1000}], "download_size": 55682, "dataset_size": 185126}, {"config_name": "distractor_agreement_relational_noun", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 191473, "num_examples": 1000}], "download_size": 59641, "dataset_size": 191473}, {"config_name": "distractor_agreement_relative_clause", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 216756, "num_examples": 1000}], "download_size": 77897, "dataset_size": 216756}, {"config_name": "drop_argument", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 109806, "num_examples": 1000}], "download_size": 39961, "dataset_size": 109806}, {"config_name": "ellipsis_n_bar_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 217590, "num_examples": 1000}], "download_size": 92776, "dataset_size": 217590}, {"config_name": "ellipsis_n_bar_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": 
[{"name": "train", "num_bytes": 233161, "num_examples": 1000}], "download_size": 98882, "dataset_size": 233161}, {"config_name": "existential_there_object_raising", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 223741, "num_examples": 1000}], "download_size": 76641, "dataset_size": 223741}, {"config_name": "existential_there_quantifiers_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 162931, "num_examples": 1000}], "download_size": 51576, "dataset_size": 162931}, {"config_name": "existential_there_quantifiers_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 164826, "num_examples": 1000}], "download_size": 52092, "dataset_size": 164826}, {"config_name": "existential_there_subject_raising", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 200063, "num_examples": 1000}], "download_size": 59519, "dataset_size": 200063}, {"config_name": "expletive_it_object_raising", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 238615, "num_examples": 1000}], "download_size": 88607, "dataset_size": 238615}, {"config_name": "inchoative", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": 
"simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 104319, "num_examples": 1000}], "download_size": 39842, "dataset_size": 104319}, {"config_name": "intransitive", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 111097, "num_examples": 1000}], "download_size": 42387, "dataset_size": 111097}, {"config_name": "irregular_past_participle_adjectives", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 144661, "num_examples": 1000}], "download_size": 36654, "dataset_size": 144661}, {"config_name": "irregular_past_participle_verbs", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 125692, "num_examples": 1000}], "download_size": 37297, "dataset_size": 125692}, {"config_name": "irregular_plural_subject_verb_agreement_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 165584, "num_examples": 1000}], "download_size": 50725, "dataset_size": 165584}, {"config_name": "irregular_plural_subject_verb_agreement_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 153843, "num_examples": 1000}], "download_size": 42707, "dataset_size": 153843}, {"config_name": 
"left_branch_island_echo_question", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 147840, "num_examples": 1000}], "download_size": 50481, "dataset_size": 147840}, {"config_name": "left_branch_island_simple_question", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 150060, "num_examples": 1000}], "download_size": 50293, "dataset_size": 150060}, {"config_name": "matrix_question_npi_licensor_present", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 153262, "num_examples": 1000}], "download_size": 51899, "dataset_size": 153262}, {"config_name": "npi_present_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 138465, "num_examples": 1000}], "download_size": 51981, "dataset_size": 138465}, {"config_name": "npi_present_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 127636, "num_examples": 1000}], "download_size": 51661, "dataset_size": 127636}, {"config_name": "only_npi_licensor_present", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": 
"lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 148516, "num_examples": 1000}], "download_size": 51361, "dataset_size": 148516}, {"config_name": "only_npi_scope", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 208902, "num_examples": 1000}], "download_size": 84970, "dataset_size": 208902}, {"config_name": "passive_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 145882, "num_examples": 1000}], "download_size": 53931, "dataset_size": 145882}, {"config_name": "passive_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 113960, "num_examples": 1000}], "download_size": 40499, "dataset_size": 113960}, {"config_name": "principle_A_c_command", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 188490, "num_examples": 1000}], "download_size": 67867, "dataset_size": 188490}, {"config_name": "principle_A_case_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 170398, "num_examples": 1000}], "download_size": 61092, "dataset_size": 170398}, {"config_name": "principle_A_case_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, 
{"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 170412, "num_examples": 1000}], "download_size": 56430, "dataset_size": 170412}, {"config_name": "principle_A_domain_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 171170, "num_examples": 1000}], "download_size": 59120, "dataset_size": 171170}, {"config_name": "principle_A_domain_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 165333, "num_examples": 1000}], "download_size": 58464, "dataset_size": 165333}, {"config_name": "principle_A_domain_3", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 158998, "num_examples": 1000}], "download_size": 52859, "dataset_size": 158998}, {"config_name": "principle_A_reconstruction", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 152104, "num_examples": 1000}], "download_size": 44480, "dataset_size": 152104}, {"config_name": "regular_plural_subject_verb_agreement_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 158819, "num_examples": 1000}], "download_size": 49466, "dataset_size": 158819}, {"config_name": "regular_plural_subject_verb_agreement_2", 
"features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 153609, "num_examples": 1000}], "download_size": 43365, "dataset_size": 153609}, {"config_name": "sentential_negation_npi_licensor_present", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 171864, "num_examples": 1000}], "download_size": 54830, "dataset_size": 171864}, {"config_name": "sentential_negation_npi_scope", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 232098, "num_examples": 1000}], "download_size": 90157, "dataset_size": 232098}, {"config_name": "sentential_subject_island", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 172432, "num_examples": 1000}], "download_size": 56666, "dataset_size": 172432}, {"config_name": "superlative_quantifiers_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 159290, "num_examples": 1000}], "download_size": 48453, "dataset_size": 159290}, {"config_name": "superlative_quantifiers_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": 
"lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 159340, "num_examples": 1000}], "download_size": 50480, "dataset_size": 159340}, {"config_name": "tough_vs_raising_1", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 148636, "num_examples": 1000}], "download_size": 44779, "dataset_size": 148636}, {"config_name": "tough_vs_raising_2", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 169684, "num_examples": 1000}], "download_size": 61465, "dataset_size": 169684}, {"config_name": "transitive", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 133104, "num_examples": 1000}], "download_size": 55090, "dataset_size": 133104}, {"config_name": "wh_island", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 142340, "num_examples": 1000}], "download_size": 52808, "dataset_size": 142340}, {"config_name": "wh_questions_object_gap", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 193045, "num_examples": 1000}], "download_size": 70049, "dataset_size": 193045}, {"config_name": "wh_questions_subject_gap", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", 
"dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 195593, "num_examples": 1000}], "download_size": 71632, "dataset_size": 195593}, {"config_name": "wh_questions_subject_gap_long_distance", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 268270, "num_examples": 1000}], "download_size": 98913, "dataset_size": 268270}, {"config_name": "wh_vs_that_no_gap", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 188872, "num_examples": 1000}], "download_size": 71710, "dataset_size": 188872}, {"config_name": "wh_vs_that_no_gap_long_distance", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 247039, "num_examples": 1000}], "download_size": 95504, "dataset_size": 247039}, {"config_name": "wh_vs_that_with_gap", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 173386, "num_examples": 1000}], "download_size": 60291, "dataset_size": 173386}, {"config_name": "wh_vs_that_with_gap_long_distance", "features": [{"name": "sentence_good", "dtype": "string"}, {"name": "sentence_bad", "dtype": "string"}, {"name": "field", "dtype": "string"}, {"name": "linguistics_term", "dtype": "string"}, {"name": "UID", "dtype": "string"}, {"name": "simple_LM_method", "dtype": "bool"}, {"name": "one_prefix_method", "dtype": "bool"}, {"name": "two_prefix_method", "dtype": "bool"}, {"name": "lexically_identical", "dtype": "bool"}, {"name": "pair_id", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 231595, "num_examples": 1000}], "download_size": 84147, "dataset_size": 231595}], "configs": 
[{"config_name": "adjunct_island", "data_files": [{"split": "train", "path": "adjunct_island/train-*"}]}, {"config_name": "anaphor_gender_agreement", "data_files": [{"split": "train", "path": "anaphor_gender_agreement/train-*"}]}, {"config_name": "anaphor_number_agreement", "data_files": [{"split": "train", "path": "anaphor_number_agreement/train-*"}]}, {"config_name": "animate_subject_passive", "data_files": [{"split": "train", "path": "animate_subject_passive/train-*"}]}, {"config_name": "animate_subject_trans", "data_files": [{"split": "train", "path": "animate_subject_trans/train-*"}]}, {"config_name": "causative", "data_files": [{"split": "train", "path": "causative/train-*"}]}, {"config_name": "complex_NP_island", "data_files": [{"split": "train", "path": "complex_NP_island/train-*"}]}, {"config_name": "coordinate_structure_constraint_complex_left_branch", "data_files": [{"split": "train", "path": "coordinate_structure_constraint_complex_left_branch/train-*"}]}, {"config_name": "coordinate_structure_constraint_object_extraction", "data_files": [{"split": "train", "path": "coordinate_structure_constraint_object_extraction/train-*"}]}, {"config_name": "determiner_noun_agreement_1", "data_files": [{"split": "train", "path": "determiner_noun_agreement_1/train-*"}]}, {"config_name": "determiner_noun_agreement_2", "data_files": [{"split": "train", "path": "determiner_noun_agreement_2/train-*"}]}, {"config_name": "determiner_noun_agreement_irregular_1", "data_files": [{"split": "train", "path": "determiner_noun_agreement_irregular_1/train-*"}]}, {"config_name": "determiner_noun_agreement_irregular_2", "data_files": [{"split": "train", "path": "determiner_noun_agreement_irregular_2/train-*"}]}, {"config_name": "determiner_noun_agreement_with_adj_2", "data_files": [{"split": "train", "path": "determiner_noun_agreement_with_adj_2/train-*"}]}, {"config_name": "determiner_noun_agreement_with_adj_irregular_1", "data_files": [{"split": "train", "path": "determiner_noun_agreement_with_adj_irregular_1/train-*"}]}, {"config_name": "determiner_noun_agreement_with_adj_irregular_2", "data_files": [{"split": "train", "path": "determiner_noun_agreement_with_adj_irregular_2/train-*"}]}, {"config_name": "determiner_noun_agreement_with_adjective_1", "data_files": [{"split": "train", "path": "determiner_noun_agreement_with_adjective_1/train-*"}]}, {"config_name": "distractor_agreement_relational_noun", "data_files": [{"split": "train", "path": "distractor_agreement_relational_noun/train-*"}]}, {"config_name": "distractor_agreement_relative_clause", "data_files": [{"split": "train", "path": "distractor_agreement_relative_clause/train-*"}]}, {"config_name": "drop_argument", "data_files": [{"split": "train", "path": "drop_argument/train-*"}]}, {"config_name": "ellipsis_n_bar_1", "data_files": [{"split": "train", "path": "ellipsis_n_bar_1/train-*"}]}, {"config_name": "ellipsis_n_bar_2", "data_files": [{"split": "train", "path": "ellipsis_n_bar_2/train-*"}]}, {"config_name": "existential_there_object_raising", "data_files": [{"split": "train", "path": "existential_there_object_raising/train-*"}]}, {"config_name": "existential_there_quantifiers_1", "data_files": [{"split": "train", "path": "existential_there_quantifiers_1/train-*"}]}, {"config_name": "existential_there_quantifiers_2", "data_files": [{"split": "train", "path": "existential_there_quantifiers_2/train-*"}]}, {"config_name": "existential_there_subject_raising", "data_files": [{"split": "train", "path": "existential_there_subject_raising/train-*"}]}, 
{"config_name": "expletive_it_object_raising", "data_files": [{"split": "train", "path": "expletive_it_object_raising/train-*"}]}, {"config_name": "inchoative", "data_files": [{"split": "train", "path": "inchoative/train-*"}]}, {"config_name": "intransitive", "data_files": [{"split": "train", "path": "intransitive/train-*"}]}, {"config_name": "irregular_past_participle_adjectives", "data_files": [{"split": "train", "path": "irregular_past_participle_adjectives/train-*"}]}, {"config_name": "irregular_past_participle_verbs", "data_files": [{"split": "train", "path": "irregular_past_participle_verbs/train-*"}]}, {"config_name": "irregular_plural_subject_verb_agreement_1", "data_files": [{"split": "train", "path": "irregular_plural_subject_verb_agreement_1/train-*"}]}, {"config_name": "irregular_plural_subject_verb_agreement_2", "data_files": [{"split": "train", "path": "irregular_plural_subject_verb_agreement_2/train-*"}]}, {"config_name": "left_branch_island_echo_question", "data_files": [{"split": "train", "path": "left_branch_island_echo_question/train-*"}]}, {"config_name": "left_branch_island_simple_question", "data_files": [{"split": "train", "path": "left_branch_island_simple_question/train-*"}]}, {"config_name": "matrix_question_npi_licensor_present", "data_files": [{"split": "train", "path": "matrix_question_npi_licensor_present/train-*"}]}, {"config_name": "npi_present_1", "data_files": [{"split": "train", "path": "npi_present_1/train-*"}]}, {"config_name": "npi_present_2", "data_files": [{"split": "train", "path": "npi_present_2/train-*"}]}, {"config_name": "only_npi_licensor_present", "data_files": [{"split": "train", "path": "only_npi_licensor_present/train-*"}]}, {"config_name": "only_npi_scope", "data_files": [{"split": "train", "path": "only_npi_scope/train-*"}]}, {"config_name": "passive_1", "data_files": [{"split": "train", "path": "passive_1/train-*"}]}, {"config_name": "passive_2", "data_files": [{"split": "train", "path": "passive_2/train-*"}]}, {"config_name": "principle_A_c_command", "data_files": [{"split": "train", "path": "principle_A_c_command/train-*"}]}, {"config_name": "principle_A_case_1", "data_files": [{"split": "train", "path": "principle_A_case_1/train-*"}]}, {"config_name": "principle_A_case_2", "data_files": [{"split": "train", "path": "principle_A_case_2/train-*"}]}, {"config_name": "principle_A_domain_1", "data_files": [{"split": "train", "path": "principle_A_domain_1/train-*"}]}, {"config_name": "principle_A_domain_2", "data_files": [{"split": "train", "path": "principle_A_domain_2/train-*"}]}, {"config_name": "principle_A_domain_3", "data_files": [{"split": "train", "path": "principle_A_domain_3/train-*"}]}, {"config_name": "principle_A_reconstruction", "data_files": [{"split": "train", "path": "principle_A_reconstruction/train-*"}]}, {"config_name": "regular_plural_subject_verb_agreement_1", "data_files": [{"split": "train", "path": "regular_plural_subject_verb_agreement_1/train-*"}]}, {"config_name": "regular_plural_subject_verb_agreement_2", "data_files": [{"split": "train", "path": "regular_plural_subject_verb_agreement_2/train-*"}]}, {"config_name": "sentential_negation_npi_licensor_present", "data_files": [{"split": "train", "path": "sentential_negation_npi_licensor_present/train-*"}]}, {"config_name": "sentential_negation_npi_scope", "data_files": [{"split": "train", "path": "sentential_negation_npi_scope/train-*"}]}, {"config_name": "sentential_subject_island", "data_files": [{"split": "train", "path": "sentential_subject_island/train-*"}]}, 
{"config_name": "superlative_quantifiers_1", "data_files": [{"split": "train", "path": "superlative_quantifiers_1/train-*"}]}, {"config_name": "superlative_quantifiers_2", "data_files": [{"split": "train", "path": "superlative_quantifiers_2/train-*"}]}, {"config_name": "tough_vs_raising_1", "data_files": [{"split": "train", "path": "tough_vs_raising_1/train-*"}]}, {"config_name": "tough_vs_raising_2", "data_files": [{"split": "train", "path": "tough_vs_raising_2/train-*"}]}, {"config_name": "transitive", "data_files": [{"split": "train", "path": "transitive/train-*"}]}, {"config_name": "wh_island", "data_files": [{"split": "train", "path": "wh_island/train-*"}]}, {"config_name": "wh_questions_object_gap", "data_files": [{"split": "train", "path": "wh_questions_object_gap/train-*"}]}, {"config_name": "wh_questions_subject_gap", "data_files": [{"split": "train", "path": "wh_questions_subject_gap/train-*"}]}, {"config_name": "wh_questions_subject_gap_long_distance", "data_files": [{"split": "train", "path": "wh_questions_subject_gap_long_distance/train-*"}]}, {"config_name": "wh_vs_that_no_gap", "data_files": [{"split": "train", "path": "wh_vs_that_no_gap/train-*"}]}, {"config_name": "wh_vs_that_no_gap_long_distance", "data_files": [{"split": "train", "path": "wh_vs_that_no_gap_long_distance/train-*"}]}, {"config_name": "wh_vs_that_with_gap", "data_files": [{"split": "train", "path": "wh_vs_that_with_gap/train-*"}]}, {"config_name": "wh_vs_that_with_gap_long_distance", "data_files": [{"split": "train", "path": "wh_vs_that_with_gap_long_distance/train-*"}]}]}
2024-01-23T09:58:08+00:00
[ "1912.00582" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-acceptability-classification #annotations_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1912.00582 #region-us
Dataset Card for "blimp" ======================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: URL * Paper: BLiMP: The Benchmark of Linguistic Minimal Pairs for English * Paper: URL * Point of Contact: * Size of downloaded dataset files: 29.58 MB * Size of the generated dataset: 11.45 MB * Total amount of disk used: 41.03 MB ### Dataset Summary BLiMP is a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics. The data is automatically generated according to expert-crafted grammars. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### adjunct\_island * Size of downloaded dataset files: 0.36 MB * Size of the generated dataset: 0.17 MB * Total amount of disk used: 0.52 MB An example of 'train' looks as follows. #### anaphor\_gender\_agreement * Size of downloaded dataset files: 0.44 MB * Size of the generated dataset: 0.14 MB * Total amount of disk used: 0.57 MB An example of 'train' looks as follows. #### anaphor\_number\_agreement * Size of downloaded dataset files: 0.45 MB * Size of the generated dataset: 0.14 MB * Total amount of disk used: 0.59 MB An example of 'train' looks as follows. #### animate\_subject\_passive * Size of downloaded dataset files: 0.46 MB * Size of the generated dataset: 0.15 MB * Total amount of disk used: 0.61 MB An example of 'train' looks as follows. #### animate\_subject\_trans * Size of downloaded dataset files: 0.43 MB * Size of the generated dataset: 0.13 MB * Total amount of disk used: 0.57 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### adjunct\_island * 'sentence\_good': a 'string' feature. * 'sentence\_bad': a 'string' feature. * 'field': a 'string' feature. * 'linguistics\_term': a 'string' feature. * 'UID': a 'string' feature. * 'simple\_LM\_method': a 'bool' feature. * 'one\_prefix\_method': a 'bool' feature. * 'two\_prefix\_method': a 'bool' feature. * 'lexically\_identical': a 'bool' feature. * 'pair\_id': a 'int32' feature. #### anaphor\_gender\_agreement * 'sentence\_good': a 'string' feature. * 'sentence\_bad': a 'string' feature. * 'field': a 'string' feature. * 'linguistics\_term': a 'string' feature. * 'UID': a 'string' feature. * 'simple\_LM\_method': a 'bool' feature. * 'one\_prefix\_method': a 'bool' feature. * 'two\_prefix\_method': a 'bool' feature. * 'lexically\_identical': a 'bool' feature. * 'pair\_id': a 'int32' feature. #### anaphor\_number\_agreement * 'sentence\_good': a 'string' feature. * 'sentence\_bad': a 'string' feature. * 'field': a 'string' feature. * 'linguistics\_term': a 'string' feature. * 'UID': a 'string' feature. * 'simple\_LM\_method': a 'bool' feature. * 'one\_prefix\_method': a 'bool' feature. * 'two\_prefix\_method': a 'bool' feature. * 'lexically\_identical': a 'bool' feature. 
* 'pair\_id': a 'int32' feature. #### animate\_subject\_passive * 'sentence\_good': a 'string' feature. * 'sentence\_bad': a 'string' feature. * 'field': a 'string' feature. * 'linguistics\_term': a 'string' feature. * 'UID': a 'string' feature. * 'simple\_LM\_method': a 'bool' feature. * 'one\_prefix\_method': a 'bool' feature. * 'two\_prefix\_method': a 'bool' feature. * 'lexically\_identical': a 'bool' feature. * 'pair\_id': a 'int32' feature. #### animate\_subject\_trans * 'sentence\_good': a 'string' feature. * 'sentence\_bad': a 'string' feature. * 'field': a 'string' feature. * 'linguistics\_term': a 'string' feature. * 'UID': a 'string' feature. * 'simple\_LM\_method': a 'bool' feature. * 'one\_prefix\_method': a 'bool' feature. * 'two\_prefix\_method': a 'bool' feature. * 'lexically\_identical': a 'bool' feature. * 'pair\_id': a 'int32' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information BLiMP is distributed under a CC-BY license. Source: URL #### Errata Some results were misreported in the published TACL version. Please refer to the corrected version on arXiv: URL ### Contributions Thanks to @lhoestq, @patrickvonplaten, @thomwolf for adding this dataset.
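The card leaves its usage sections empty, so the following is a minimal sketch of the intended evaluation, assuming the Hugging Face `datasets` and `transformers` libraries are available; GPT-2 and the `adjunct_island` config are illustrative choices, not prescribed by the card.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def log_likelihood(sentence: str) -> float:
    # model(...).loss is the mean NLL over predicted tokens; scaling by the
    # token count approximates the total log-likelihood of the sentence.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return -loss.item() * ids.size(1)

# One of the 67 configs listed in the metadata above.
pairs = load_dataset("blimp", "adjunct_island", split="train")

# Simple LM method: a pair counts as correct when the acceptable
# sentence receives the higher log-likelihood.
correct = sum(
    log_likelihood(ex["sentence_good"]) > log_likelihood(ex["sentence_bad"])
    for ex in pairs
)
print(f"accuracy: {correct / len(pairs):.3f}")
```

Whole-sentence comparison is what the 'simple\_LM\_method' flag licenses; for pairs flagged 'one\_prefix\_method' or 'two\_prefix\_method', the BLiMP paper instead compares, roughly, the probability of a critical region given a shared or contrasting prefix, which is why those boolean fields are carried on every pair.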
[ "### Dataset Summary\n\n\nBLiMP is a challenge set for evaluating what language models (LMs) know about\nmajor grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each\ncontaining 1000 minimal pairs isolating specific contrasts in syntax,\nmorphology, or semantics. The data is automatically generated according to\nexpert-crafted grammars.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### adjunct\\_island\n\n\n* Size of downloaded dataset files: 0.36 MB\n* Size of the generated dataset: 0.17 MB\n* Total amount of disk used: 0.52 MB\n\n\nAn example of 'train' looks as follows.", "#### anaphor\\_gender\\_agreement\n\n\n* Size of downloaded dataset files: 0.44 MB\n* Size of the generated dataset: 0.14 MB\n* Total amount of disk used: 0.57 MB\n\n\nAn example of 'train' looks as follows.", "#### anaphor\\_number\\_agreement\n\n\n* Size of downloaded dataset files: 0.45 MB\n* Size of the generated dataset: 0.14 MB\n* Total amount of disk used: 0.59 MB\n\n\nAn example of 'train' looks as follows.", "#### animate\\_subject\\_passive\n\n\n* Size of downloaded dataset files: 0.46 MB\n* Size of the generated dataset: 0.15 MB\n* Total amount of disk used: 0.61 MB\n\n\nAn example of 'train' looks as follows.", "#### animate\\_subject\\_trans\n\n\n* Size of downloaded dataset files: 0.43 MB\n* Size of the generated dataset: 0.13 MB\n* Total amount of disk used: 0.57 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### adjunct\\_island\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.", "#### anaphor\\_gender\\_agreement\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.", "#### anaphor\\_number\\_agreement\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.", "#### animate\\_subject\\_passive\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.", "#### animate\\_subject\\_trans\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 
'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nBLiMP is distributed under a CC-BY license. Source: URL", "#### Errata\n\n\nSome results were misreported in the published TACL version. Please refer to the corrected version on arXiv: URL", "### Contributions\n\n\nThanks to @lhoestq, @patrickvonplaten, @thomwolf for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-acceptability-classification #annotations_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1912.00582 #region-us \n", "### Dataset Summary\n\n\nBLiMP is a challenge set for evaluating what language models (LMs) know about\nmajor grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each\ncontaining 1000 minimal pairs isolating specific contrasts in syntax,\nmorphology, or semantics. The data is automatically generated according to\nexpert-crafted grammars.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### adjunct\\_island\n\n\n* Size of downloaded dataset files: 0.36 MB\n* Size of the generated dataset: 0.17 MB\n* Total amount of disk used: 0.52 MB\n\n\nAn example of 'train' looks as follows.", "#### anaphor\\_gender\\_agreement\n\n\n* Size of downloaded dataset files: 0.44 MB\n* Size of the generated dataset: 0.14 MB\n* Total amount of disk used: 0.57 MB\n\n\nAn example of 'train' looks as follows.", "#### anaphor\\_number\\_agreement\n\n\n* Size of downloaded dataset files: 0.45 MB\n* Size of the generated dataset: 0.14 MB\n* Total amount of disk used: 0.59 MB\n\n\nAn example of 'train' looks as follows.", "#### animate\\_subject\\_passive\n\n\n* Size of downloaded dataset files: 0.46 MB\n* Size of the generated dataset: 0.15 MB\n* Total amount of disk used: 0.61 MB\n\n\nAn example of 'train' looks as follows.", "#### animate\\_subject\\_trans\n\n\n* Size of downloaded dataset files: 0.43 MB\n* Size of the generated dataset: 0.13 MB\n* Total amount of disk used: 0.57 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### adjunct\\_island\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.", "#### anaphor\\_gender\\_agreement\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.", "#### anaphor\\_number\\_agreement\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.", "#### animate\\_subject\\_passive\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' 
feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.", "#### animate\\_subject\\_trans\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nBLiMP is distributed under a CC-BY license. Source: URL", "#### Errata\n\n\nSome results were misreported in the published TACL version. Please refer to the corrected version on arXiv: URL", "### Contributions\n\n\nThanks to @lhoestq, @patrickvonplaten, @thomwolf for adding this dataset." ]
[ 102, 88, 10, 11, 6, 55, 60, 60, 60, 58, 17, 165, 169, 169, 168, 167, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 22, 30, 29 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-acceptability-classification #annotations_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-1912.00582 #region-us \n### Dataset Summary\n\n\nBLiMP is a challenge set for evaluating what language models (LMs) know about\nmajor grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each\ncontaining 1000 minimal pairs isolating specific contrasts in syntax,\nmorphology, or semantics. The data is automatically generated according to\nexpert-crafted grammars.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### adjunct\\_island\n\n\n* Size of downloaded dataset files: 0.36 MB\n* Size of the generated dataset: 0.17 MB\n* Total amount of disk used: 0.52 MB\n\n\nAn example of 'train' looks as follows.#### anaphor\\_gender\\_agreement\n\n\n* Size of downloaded dataset files: 0.44 MB\n* Size of the generated dataset: 0.14 MB\n* Total amount of disk used: 0.57 MB\n\n\nAn example of 'train' looks as follows.#### anaphor\\_number\\_agreement\n\n\n* Size of downloaded dataset files: 0.45 MB\n* Size of the generated dataset: 0.14 MB\n* Total amount of disk used: 0.59 MB\n\n\nAn example of 'train' looks as follows.#### animate\\_subject\\_passive\n\n\n* Size of downloaded dataset files: 0.46 MB\n* Size of the generated dataset: 0.15 MB\n* Total amount of disk used: 0.61 MB\n\n\nAn example of 'train' looks as follows.", "passage: #### animate\\_subject\\_trans\n\n\n* Size of downloaded dataset files: 0.43 MB\n* Size of the generated dataset: 0.13 MB\n* Total amount of disk used: 0.57 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### adjunct\\_island\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.#### anaphor\\_gender\\_agreement\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.", "passage: #### anaphor\\_number\\_agreement\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.#### animate\\_subject\\_passive\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 
'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.#### animate\\_subject\\_trans\n\n\n* 'sentence\\_good': a 'string' feature.\n* 'sentence\\_bad': a 'string' feature.\n* 'field': a 'string' feature.\n* 'linguistics\\_term': a 'string' feature.\n* 'UID': a 'string' feature.\n* 'simple\\_LM\\_method': a 'bool' feature.\n* 'one\\_prefix\\_method': a 'bool' feature.\n* 'two\\_prefix\\_method': a 'bool' feature.\n* 'lexically\\_identical': a 'bool' feature.\n* 'pair\\_id': a 'int32' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information\n\n\nBLiMP is distributed under a CC-BY license. Source: URL#### Errata\n\n\nSome results were misreported in the published TACL version. Please refer to the corrected version on arXiv: URL" ]
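The boolean method fields listed above encode how each BLiMP sub-dataset is meant to be scored; the simplest protocol (`simple_LM_method`) just checks whether a language model assigns higher probability to `sentence_good` than to `sentence_bad`. A minimal sketch of that check, assuming the `transformers` library with GPT-2 as an arbitrary stand-in model and an illustrative pair that is not drawn from the dataset:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Sum of log-probabilities the model assigns to the sentence's tokens."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL over the predicted tokens
    return -loss.item() * (ids.shape[1] - 1)

# Illustrative pair; real pairs come from the sentence_good / sentence_bad fields.
good, bad = "The cats sleep soundly.", "The cats sleeps soundly."
print(sentence_logprob(good) > sentence_logprob(bad))  # True means the model "passes"
```

A model's accuracy on a sub-dataset is then simply the fraction of its 1000 pairs for which this comparison comes out True.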
728947f6c98ade87aa396004440cb3b58f173cb8
# Dataset Card for Blog Authorship Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://u.cs.biu.ac.il/~koppel/BlogCorpus.htm](https://u.cs.biu.ac.il/~koppel/BlogCorpus.htm) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 312.95 MB - **Size of the generated dataset:** 647.76 MB - **Total amount of disk used:** 960.71 MB ### Dataset Summary The Blog Authorship Corpus consists of the collected posts of 19,320 bloggers gathered from blogger.com in August 2004. The corpus incorporates a total of 681,288 posts and over 140 million words - or approximately 35 posts and 7250 words per person. Each blog is presented as a separate file, the name of which indicates a blogger id# and the blogger’s self-provided gender, age, industry and astrological sign. (All are labeled for gender and age but for many, industry and/or sign is marked as unknown.) All bloggers included in the corpus fall into one of three age groups: - 8240 "10s" blogs (ages 13-17), - 8086 "20s" blogs (ages 23-27), - 2994 "30s" blogs (ages 33-47). For each age group there are an equal number of male and female bloggers. Each blog in the corpus includes at least 200 occurrences of common English words. All formatting has been stripped with two exceptions. Individual posts within a single blogger are separated by the date of the following post and links within a post are denoted by the label urllink. The corpus may be freely used for non-commercial research purposes. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages The language of the dataset is English (`en`). ## Dataset Structure ### Data Instances #### blog-authorship-corpus - **Size of downloaded dataset files:** 312.95 MB - **Size of the generated dataset:** 647.76 MB - **Total amount of disk used:** 960.71 MB An example of 'validation' looks as follows. ``` { "age": 23, "date": "27,July,2003", "gender": "female", "horoscope": "Scorpion", "job": "Student", "text": "This is a second test file." 
} ``` ### Data Fields The data fields are the same among all splits. #### blog-authorship-corpus - `text`: a `string` feature. - `date`: a `string` feature. - `gender`: a `string` feature. - `age`: a `int32` feature. - `horoscope`: a `string` feature. - `job`: a `string` feature. ### Data Splits | name |train |validation| |----------------------|-----:|---------:| |blog-authorship-corpus|532812| 31277| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information The corpus may be freely used for non-commercial research purposes. ### Citation Information ``` @inproceedings{schler2006effects, title={Effects of age and gender on blogging.}, author={Schler, Jonathan and Koppel, Moshe and Argamon, Shlomo and Pennebaker, James W}, booktitle={AAAI spring symposium: Computational approaches to analyzing weblogs}, volume={6}, pages={199--205}, year={2006} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
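For orientation, a minimal loading sketch for the card above, assuming the dataset id `blog_authorship_corpus` recorded below resolves on the Hugging Face Hub (recent `datasets` versions may additionally require `trust_remote_code=True` for script-based datasets like this one):

```python
from datasets import load_dataset

ds = load_dataset("blog_authorship_corpus", split="validation")

# Each row carries the six fields documented in the card.
row = ds[0]
print(row["gender"], row["age"], row["job"], row["text"][:80])
```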
blog_authorship_corpus
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "paperswithcode_id": "blog-authorship-corpus", "pretty_name": "Blog Authorship Corpus", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "age", "dtype": "int32"}, {"name": "horoscope", "dtype": "string"}, {"name": "job", "dtype": "string"}], "config_name": "blog_authorship_corpus", "splits": [{"name": "train", "num_bytes": 753833081, "num_examples": 689793}, {"name": "validation", "num_bytes": 41236028, "num_examples": 37919}], "download_size": 632898892, "dataset_size": 795069109}}
2023-06-06T15:16:13+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us
Dataset Card for Blog Authorship Corpus ======================================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: https://u.URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 312.95 MB * Size of the generated dataset: 647.76 MB * Total amount of disk used: 960.71 MB ### Dataset Summary The Blog Authorship Corpus consists of the collected posts of 19,320 bloggers gathered from URL in August 2004. The corpus incorporates a total of 681,288 posts and over 140 million words - or approximately 35 posts and 7250 words per person. Each blog is presented as a separate file, the name of which indicates a blogger id# and the blogger’s self-provided gender, age, industry and astrological sign. (All are labeled for gender and age but for many, industry and/or sign is marked as unknown.) All bloggers included in the corpus fall into one of three age groups: * 8240 "10s" blogs (ages 13-17), * 8086 "20s" blogs (ages 23-27), * 2994 "30s" blogs (ages 33-47). For each age group there are an equal number of male and female bloggers. Each blog in the corpus includes at least 200 occurrences of common English words. All formatting has been stripped with two exceptions. Individual posts within a single blogger are separated by the date of the following post and links within a post are denoted by the label urllink. The corpus may be freely used for non-commercial research purposes. ### Supported Tasks and Leaderboards ### Languages The language of the dataset is English ('en'). Dataset Structure ----------------- ### Data Instances #### blog-authorship-corpus * Size of downloaded dataset files: 312.95 MB * Size of the generated dataset: 647.76 MB * Total amount of disk used: 960.71 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### blog-authorship-corpus * 'text': a 'string' feature. * 'date': a 'string' feature. * 'gender': a 'string' feature. * 'age': a 'int32' feature. * 'horoscope': a 'string' feature. * 'job': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The corpus may be freely used for non-commercial research purposes. ### Contributions Thanks to @thomwolf, @lewtun, @patrickvonplaten for adding this dataset.
[ "### Dataset Summary\n\n\nThe Blog Authorship Corpus consists of the collected posts of 19,320 bloggers gathered from URL in August 2004. The corpus incorporates a total of 681,288 posts and over 140 million words - or approximately 35 posts and 7250 words per person.\n\n\nEach blog is presented as a separate file, the name of which indicates a blogger id# and the blogger’s self-provided gender, age, industry and astrological sign. (All are labeled for gender and age but for many, industry and/or sign is marked as unknown.)\n\n\nAll bloggers included in the corpus fall into one of three age groups:\n\n\n* 8240 \"10s\" blogs (ages 13-17),\n* 8086 \"20s\" blogs (ages 23-27),\n* 2994 \"30s\" blogs (ages 33-47).\n\n\nFor each age group there are an equal number of male and female bloggers.\n\n\nEach blog in the corpus includes at least 200 occurrences of common English words. All formatting has been stripped with two exceptions. Individual posts within a single blogger are separated by the date of the following post and links within a post are denoted by the label urllink.\n\n\nThe corpus may be freely used for non-commercial research purposes.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe language of the dataset is English ('en').\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### blog-authorship-corpus\n\n\n* Size of downloaded dataset files: 312.95 MB\n* Size of the generated dataset: 647.76 MB\n* Total amount of disk used: 960.71 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### blog-authorship-corpus\n\n\n* 'text': a 'string' feature.\n* 'date': a 'string' feature.\n* 'gender': a 'string' feature.\n* 'age': a 'int32' feature.\n* 'horoscope': a 'string' feature.\n* 'job': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe corpus may be freely used for non-commercial research purposes.", "### Contributions\n\n\nThanks to @thomwolf, @lewtun, @patrickvonplaten for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n", "### Dataset Summary\n\n\nThe Blog Authorship Corpus consists of the collected posts of 19,320 bloggers gathered from URL in August 2004. The corpus incorporates a total of 681,288 posts and over 140 million words - or approximately 35 posts and 7250 words per person.\n\n\nEach blog is presented as a separate file, the name of which indicates a blogger id# and the blogger’s self-provided gender, age, industry and astrological sign. (All are labeled for gender and age but for many, industry and/or sign is marked as unknown.)\n\n\nAll bloggers included in the corpus fall into one of three age groups:\n\n\n* 8240 \"10s\" blogs (ages 13-17),\n* 8086 \"20s\" blogs (ages 23-27),\n* 2994 \"30s\" blogs (ages 33-47).\n\n\nFor each age group there are an equal number of male and female bloggers.\n\n\nEach blog in the corpus includes at least 200 occurrences of common English words. All formatting has been stripped with two exceptions. Individual posts within a single blogger are separated by the date of the following post and links within a post are denoted by the label urllink.\n\n\nThe corpus may be freely used for non-commercial research purposes.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe language of the dataset is English ('en').\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### blog-authorship-corpus\n\n\n* Size of downloaded dataset files: 312.95 MB\n* Size of the generated dataset: 647.76 MB\n* Total amount of disk used: 960.71 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### blog-authorship-corpus\n\n\n* 'text': a 'string' feature.\n* 'date': a 'string' feature.\n* 'gender': a 'string' feature.\n* 'age': a 'int32' feature.\n* 'horoscope': a 'string' feature.\n* 'job': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe corpus may be freely used for non-commercial research purposes.", "### Contributions\n\n\nThanks to @thomwolf, @lewtun, @patrickvonplaten for adding this dataset." ]
[ 89, 273, 10, 24, 6, 63, 17, 81, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 23, 28 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n### Dataset Summary\n\n\nThe Blog Authorship Corpus consists of the collected posts of 19,320 bloggers gathered from URL in August 2004. The corpus incorporates a total of 681,288 posts and over 140 million words - or approximately 35 posts and 7250 words per person.\n\n\nEach blog is presented as a separate file, the name of which indicates a blogger id# and the blogger’s self-provided gender, age, industry and astrological sign. (All are labeled for gender and age but for many, industry and/or sign is marked as unknown.)\n\n\nAll bloggers included in the corpus fall into one of three age groups:\n\n\n* 8240 \"10s\" blogs (ages 13-17),\n* 8086 \"20s\" blogs (ages 23-27),\n* 2994 \"30s\" blogs (ages 33-47).\n\n\nFor each age group there are an equal number of male and female bloggers.\n\n\nEach blog in the corpus includes at least 200 occurrences of common English words. All formatting has been stripped with two exceptions. Individual posts within a single blogger are separated by the date of the following post and links within a post are denoted by the label urllink.\n\n\nThe corpus may be freely used for non-commercial research purposes.### Supported Tasks and Leaderboards### Languages\n\n\nThe language of the dataset is English ('en').\n\n\nDataset Structure\n-----------------### Data Instances#### blog-authorship-corpus\n\n\n* Size of downloaded dataset files: 312.95 MB\n* Size of the generated dataset: 647.76 MB\n* Total amount of disk used: 960.71 MB\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits." ]
99612296bc093f0720cac7d7cbfcb67eecf1ca2f
# Dataset Card for Bengali Hate Speech Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Bengali Hate Speech Dataset](https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset) - **Repository:** [Bengali Hate Speech Dataset](https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset) - **Paper:** [Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network](https://arxiv.org/abs/2004.07807) - **Point of Contact:** [Md. Rezaul Karim](mailto:rezaul.karim.fit@gmail.com) ### Dataset Summary The Bengali Hate Speech Dataset is a Bengali-language dataset of news articles collected from various Bengali media sources and categorized based on the type of hate in the text. The dataset was created to provide greater support for under-resourced languages like Bengali on NLP tasks, and serves as a benchmark for multiple types of classification tasks. ### Supported Tasks and Leaderboards * `topic classification`: The dataset can be used to train a Multichannel Convolutional-LSTM for classifying different types of hate speech. The model performance can be measured by its F1 score. ### Languages The text in the dataset is in Bengali and the associated BCP-47 code is `bn`. ## Dataset Structure ### Data Instances A data instance takes the form of a news article and its associated label. 🚨 Beware that the following example contains extremely offensive content! An example looks like this: ``` {"text": "রেন্ডিয়াকে পৃথীবির মানচিএ থেকে মুচে ফেলতে হবে", "label": "Geopolitical"} ``` ### Data Fields * `text`: the text of the Bengali news article * `label`: one of `Geopolitical`, `Personal`, `Political`, `Religious`, or `Gender abusive` indicating the type of hate speech ### Data Splits The dataset has 3418 examples. ## Dataset Creation ### Curation Rationale Under-resourced languages like Bengali lack supporting resources that languages like English have. This dataset was collected from multiple Bengali news sources to provide several classification benchmarks for hate speech detection, document classification and sentiment analysis. ### Source Data #### Initial Data Collection and Normalization Bengali articles were collected from a Bengali Wikipedia dump, Bengali news articles, news dumps of TV channels, books, blogs, sports portals and social media. Emphasis was placed on Facebook pages and newspaper sources because they have about 50 million followers and are a common source of opinion and hate speech. 
The full dataset consists of 250 million articles and is currently being prepared. This is a subset of the full dataset. #### Who are the source language producers? The source language producers are Bengali authors and users who interact with these various forms of Bengali media. ### Annotations #### Annotation process The data was annotated by manually identifying frequently occurring terms in texts containing hate speech and references to specific entities. The authors also prepared normalized frequency vectors of 175 abusive terms that are commonly used to express hate in Bengali. A hate label is assigned if at least one of these terms exists in the text. Annotators were provided with unbiased, text-only content to make the decision. Non-hate statements were removed from the list and the category of hate was further divided into political, personal, gender abusive, geopolitical and religious. To reduce possible bias, each label was assigned based on a majority vote over the annotators' opinions, and Cohen's Kappa was computed to measure inter-annotator agreement. #### Who are the annotators? Three native Bengali speakers and two linguists annotated the dataset, which was then reviewed and validated by three experts (one South Asian linguist and two native speakers). ### Personal and Sensitive Information The dataset contains very sensitive and highly offensive comments in a religious, political and gendered context. Some of the comments are directed towards contemporary public figures like politicians, religious leaders, celebrities and athletes. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of the dataset is to improve hate speech detection in Bengali. The growth of social media has enabled people to express hate freely online and there has been a lot of focus on detecting hate speech for highly resourced languages like English. The use of hate speech is as pervasive in Bengali as in any other major language, and it can have serious and deadly consequences. Failure to react to hate speech renders targeted minorities more vulnerable to attack and it can also create indifference towards their treatment from majority populations. ### Discussion of Biases The dataset was collected using a bootstrapping approach. An initial search was made for specific types of texts, articles and tweets containing common harassment directed at targeting characteristics. As a result, this dataset contains **extremely** offensive content that is disturbing. In addition, Facebook pages and newspaper sources were emphasized because they are well-known for having hate and harassment issues. ### Other Known Limitations The dataset contains racist, sexist, homophobic and offensive comments. It is collected and annotated for research-related purposes only. ## Additional Information ### Dataset Curators The dataset was curated by Md. Rezaul Karim, Sumon Kanti Dey, Bharathi Raja Chakravarthi, John McCrae and Michael Cochez. ### Licensing Information This dataset is licensed under the MIT License. ### Citation Information ``` @inproceedings{karim2020BengaliNLP, title={Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network}, author={Karim, Md. Rezaul and Chakravarthi, Bharathi Raja and 
McCrae, John P. and Cochez, Michael}, booktitle={7th IEEE International Conference on Data Science and Advanced Analytics (IEEE DSAA, 2020)}, publisher={IEEE}, year={2020} } ``` ### Contributions Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
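Because the `label` column is stored as a `ClassLabel` (see the metadata record below), the integer values can be mapped back to the category names listed in the card. A minimal sketch, assuming the dataset id `bn_hate_speech` resolves on the Hugging Face Hub:

```python
from datasets import load_dataset

ds = load_dataset("bn_hate_speech", split="train")
names = ds.features["label"].names  # ['Personal', 'Political', 'Religious', ...]

row = ds[0]
print(names[row["label"]], row["text"][:60])
```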
bn_hate_speech
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:bn", "license:mit", "hate-speech-topic-classification", "arxiv:2004.07807", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["found"], "language": ["bn"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "paperswithcode_id": "bengali-hate-speech", "pretty_name": "Bengali Hate Speech Dataset", "tags": ["hate-speech-topic-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Personal", "1": "Political", "2": "Religious", "3": "Geopolitical", "4": "Gender abusive"}}}}], "splits": [{"name": "train", "num_bytes": 972631, "num_examples": 3418}], "download_size": 389814, "dataset_size": 972631}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-10T10:29:39+00:00
[ "2004.07807" ]
[ "bn" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Bengali #license-mit #hate-speech-topic-classification #arxiv-2004.07807 #region-us
# Dataset Card for Bengali Hate Speech Dataset ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Bengali Hate Speech Dataset - Repository: Bengali Hate Speech Dataset - Paper: Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network - Point of Contact: Md. Rezaul Karim ### Dataset Summary The Bengali Hate Speech Dataset is a Bengali-language dataset of news articles collected from various Bengali media sources and categorized based on the type of hate in the text. The dataset was created to provide greater support for under-resourced languages like Bengali on NLP tasks, and serves as a benchmark for multiple types of classification tasks. ### Supported Tasks and Leaderboards * 'topic classification': The dataset can be used to train a Multichannel Convolutional-LSTM for classifying different types of hate speech. The model performance can be measured by its F1 score. ### Languages The text in the dataset is in Bengali and the associated BCP-47 code is 'bn'. ## Dataset Structure ### Data Instances A data instance takes the form of a news article and its associated label. Beware that the following example contains extremely offensive content! An example looks like this: ### Data Fields * 'text': the text of the Bengali news article * 'label': one of 'Geopolitical', 'Personal', 'Political', 'Religious', or 'Gender abusive' indicating the type of hate speech ### Data Splits The dataset has 3418 examples. ## Dataset Creation ### Curation Rationale Under-resourced languages like Bengali lack supporting resources that languages like English have. This dataset was collected from multiple Bengali news sources to provide several classification benchmarks for hate speech detection, document classification and sentiment analysis. ### Source Data #### Initial Data Collection and Normalization Bengali articles were collected from a Bengali Wikipedia dump, Bengali news articles, news dumps of TV channels, books, blogs, sports portals and social media. Emphasis was placed on Facebook pages and newspaper sources because they have about 50 million followers and are a common source of opinion and hate speech. The full dataset consists of 250 million articles and is currently being prepared. This is a subset of the full dataset. #### Who are the source language producers? The source language producers are Bengali authors and users who interact with these various forms of Bengali media. ### Annotations #### Annotation process The data was annotated by manually identifying frequently occurring terms in texts containing hate speech and references to specific entities. The authors also prepared normalized frequency vectors of 175 abusive terms that are commonly used to express hate in Bengali. A hate label is assigned if at least one of these terms exists in the text. Annotators were provided with unbiased, text-only content to make the decision. 
Non-hate statements were removed from the list and the category of hate was further divided into political, personal, gender abusive, geopolitical and religious. To reduce possible bias, each label was assigned based on a majority vote over the annotators' opinions, and Cohen's Kappa was computed to measure inter-annotator agreement. #### Who are the annotators? Three native Bengali speakers and two linguists annotated the dataset, which was then reviewed and validated by three experts (one South Asian linguist and two native speakers). ### Personal and Sensitive Information The dataset contains very sensitive and highly offensive comments in a religious, political and gendered context. Some of the comments are directed towards contemporary public figures like politicians, religious leaders, celebrities and athletes. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of the dataset is to improve hate speech detection in Bengali. The growth of social media has enabled people to express hate freely online and there has been a lot of focus on detecting hate speech for highly resourced languages like English. The use of hate speech is as pervasive in Bengali as in any other major language, and it can have serious and deadly consequences. Failure to react to hate speech renders targeted minorities more vulnerable to attack and it can also create indifference towards their treatment from majority populations. ### Discussion of Biases The dataset was collected using a bootstrapping approach. An initial search was made for specific types of texts, articles and tweets containing common harassment directed at targeting characteristics. As a result, this dataset contains extremely offensive content that is disturbing. In addition, Facebook pages and newspaper sources were emphasized because they are well-known for having hate and harassment issues. ### Other Known Limitations The dataset contains racist, sexist, homophobic and offensive comments. It is collected and annotated for research-related purposes only. ## Additional Information ### Dataset Curators The dataset was curated by Md. Rezaul Karim, Sumon Kanti Dey, Bharathi Raja Chakravarthi, John McCrae and Michael Cochez. ### Licensing Information This dataset is licensed under the MIT License. ### Contributions Thanks to @stevhliu for adding this dataset.
[ "# Dataset Card for Bengali Hate Speech Dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Bengali Hate Speech Dataset\n- Repository: Bengali Hate Speech Dataset\n- Paper: Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network\n- Point of Contact: Md. Rezaul Karim", "### Dataset Summary\n\nThe Bengali Hate Speech Dataset is a Bengali-language dataset of news articles collected from various Bengali media sources and categorized based on the type of hate in the text. The dataset was created to provide greater support for under-resourced languages like Bengali on NLP tasks, and serves as a benchmark for multiple types of classification tasks.", "### Supported Tasks and Leaderboards\n\n* 'topic classification': The dataset can be used to train a Multichannel Convolutional-LSTM for classifying different types of hate speech. The model performance can be measured by its F1 score.", "### Languages\n\nThe text in the dataset is in Bengali and the associated BCP-47 code is 'bn'.", "## Dataset Structure", "### Data Instances\n\nA data instance takes the form of a news article and its associated label. \n\n Beware that the following example contains extremely offensive content! \n\nAn example looks like this:", "### Data Fields\n\n* 'text': the text of the Bengali news article\n* 'label': one of 'Geopolitical', 'Personal', 'Political', 'Religious', or 'Gender abusive' indicating the type of hate speech", "### Data Splits\n\nThe dataset has 3418 examples.", "## Dataset Creation", "### Curation Rationale\n\nUnder-resourced languages like Bengali lack supporting resources that languages like English have. This dataset was collected from multiple Bengali news sources to provide several classification benchmarks for hate speech detection, document classification and sentiment analysis.", "### Source Data", "#### Initial Data Collection and Normalization\n\nBengali articles were collected from a Bengali Wikipedia dump, Bengali news articles, news dumps of TV channels, books, blogs, sports portal and social media. Emphasis was placed on Facebook pages and newspaper sources because they have about 50 million followers and is a common source of opinion and hate speech. The full dataset consists of 250 million articles and is currently being prepared. This is a subset of the full dataset.", "#### Who are the source language producers?\n\nThe source language producers are Bengali authors and users who interact with these various forms of Bengali media.", "### Annotations", "#### Annotation process\n\nThe data was annotated by manually identifying freqently occurring terms in texts containing hate speech and references to specific entities. The authors also prepared normalized frequency vectors of 175 abusive terms that are commonly used to express hate in Bengali. A hate label is assigned if at least one of these terms exists in the text. Annotator's were provided with unbiased text only contents to make the decision. 
Non-hate statements were removed from the list and the category of hate was further divided into political, personal, gender abusive, geopolitical and religious. To reduce possible bias, each label was assigned based on a majority voting on the annotator's opinions and Cohen's Kappa was computed to measure inter-annotator agreement.", "#### Who are the annotators?\n\nThree native Bengali speakers and two linguists annotated the dataset which was then reviewed and validated by three experts (one South Asian linguist and two native speakers).", "### Personal and Sensitive Information\n\nThe dataset contains very sensitive and highly offensive comments in a religious, political and gendered context. Some of the comments are directed towards contemporary public figures like politicians, religious leaders, celebrities and athletes.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe purpose of the dataset is to improve hate speech detection in Bengali. The growth of social media has enabled people to express hate freely online and there has been a lot of focus on detecting hate speech for highly resourced languages like English. The use of hate speech is pervasive, like any other major language, which can have serious and deadly consequences. Failure to react to hate speech renders targeted minorities more vulnerable to attack and it can also create indifference towards their treatment from majority populations.", "### Discussion of Biases\n\nThe dataset was collected using a bootstrapping approach. An initial search was made for specific types of texts, articles and tweets containing common harassment directed at targeting characteristics. As a result, this dataset contains extremely offensive content that is disturbing. In addition, Facebook pages and newspaper sources were emphasized because they are well-known for having hate and harassment issues.", "### Other Known Limitations\n\nThe dataset contains racist, sexist, homophobic and offensive comments. It is collected and annotated for research related purposes only.", "## Additional Information", "### Dataset Curators\n\nThe dataset was curated by Md. Rezaul Karim, Sumon Kanti Dey, Bharathi Raja Chakravarthi, John McCrae and Michael Cochez.", "### Licensing Information\n\nThis dataset is licensed under the MIT License.", "### Contributions\n\nThanks to @stevhliu for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Bengali #license-mit #hate-speech-topic-classification #arxiv-2004.07807 #region-us \n", "# Dataset Card for Bengali Hate Speech Dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Bengali Hate Speech Dataset\n- Repository: Bengali Hate Speech Dataset\n- Paper: Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network\n- Point of Contact: Md. Rezaul Karim", "### Dataset Summary\n\nThe Bengali Hate Speech Dataset is a Bengali-language dataset of news articles collected from various Bengali media sources and categorized based on the type of hate in the text. The dataset was created to provide greater support for under-resourced languages like Bengali on NLP tasks, and serves as a benchmark for multiple types of classification tasks.", "### Supported Tasks and Leaderboards\n\n* 'topic classification': The dataset can be used to train a Multichannel Convolutional-LSTM for classifying different types of hate speech. The model performance can be measured by its F1 score.", "### Languages\n\nThe text in the dataset is in Bengali and the associated BCP-47 code is 'bn'.", "## Dataset Structure", "### Data Instances\n\nA data instance takes the form of a news article and its associated label. \n\n Beware that the following example contains extremely offensive content! \n\nAn example looks like this:", "### Data Fields\n\n* 'text': the text of the Bengali news article\n* 'label': one of 'Geopolitical', 'Personal', 'Political', 'Religious', or 'Gender abusive' indicating the type of hate speech", "### Data Splits\n\nThe dataset has 3418 examples.", "## Dataset Creation", "### Curation Rationale\n\nUnder-resourced languages like Bengali lack supporting resources that languages like English have. This dataset was collected from multiple Bengali news sources to provide several classification benchmarks for hate speech detection, document classification and sentiment analysis.", "### Source Data", "#### Initial Data Collection and Normalization\n\nBengali articles were collected from a Bengali Wikipedia dump, Bengali news articles, news dumps of TV channels, books, blogs, sports portal and social media. Emphasis was placed on Facebook pages and newspaper sources because they have about 50 million followers and is a common source of opinion and hate speech. The full dataset consists of 250 million articles and is currently being prepared. 
This is a subset of the full dataset.", "#### Who are the source language producers?\n\nThe source language producers are Bengali authors and users who interact with these various forms of Bengali media.", "### Annotations", "#### Annotation process\n\nThe data was annotated by manually identifying freqently occurring terms in texts containing hate speech and references to specific entities. The authors also prepared normalized frequency vectors of 175 abusive terms that are commonly used to express hate in Bengali. A hate label is assigned if at least one of these terms exists in the text. Annotator's were provided with unbiased text only contents to make the decision. Non-hate statements were removed from the list and the category of hate was further divided into political, personal, gender abusive, geopolitical and religious. To reduce possible bias, each label was assigned based on a majority voting on the annotator's opinions and Cohen's Kappa was computed to measure inter-annotator agreement.", "#### Who are the annotators?\n\nThree native Bengali speakers and two linguists annotated the dataset which was then reviewed and validated by three experts (one South Asian linguist and two native speakers).", "### Personal and Sensitive Information\n\nThe dataset contains very sensitive and highly offensive comments in a religious, political and gendered context. Some of the comments are directed towards contemporary public figures like politicians, religious leaders, celebrities and athletes.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe purpose of the dataset is to improve hate speech detection in Bengali. The growth of social media has enabled people to express hate freely online and there has been a lot of focus on detecting hate speech for highly resourced languages like English. The use of hate speech is pervasive, like any other major language, which can have serious and deadly consequences. Failure to react to hate speech renders targeted minorities more vulnerable to attack and it can also create indifference towards their treatment from majority populations.", "### Discussion of Biases\n\nThe dataset was collected using a bootstrapping approach. An initial search was made for specific types of texts, articles and tweets containing common harassment directed at targeting characteristics. As a result, this dataset contains extremely offensive content that is disturbing. In addition, Facebook pages and newspaper sources were emphasized because they are well-known for having hate and harassment issues.", "### Other Known Limitations\n\nThe dataset contains racist, sexist, homophobic and offensive comments. It is collected and annotated for research related purposes only.", "## Additional Information", "### Dataset Curators\n\nThe dataset was curated by Md. Rezaul Karim, Sumon Kanti Dey, Bharathi Raja Chakravarthi, John McCrae and Michael Cochez.", "### Licensing Information\n\nThis dataset is licensed under the MIT License.", "### Contributions\n\nThanks to @stevhliu for adding this dataset." ]
[ 108, 11, 120, 63, 83, 57, 25, 6, 40, 61, 14, 5, 58, 4, 100, 32, 5, 183, 49, 57, 8, 117, 100, 40, 5, 45, 17, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Bengali #license-mit #hate-speech-topic-classification #arxiv-2004.07807 #region-us \n# Dataset Card for Bengali Hate Speech Dataset## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Bengali Hate Speech Dataset\n- Repository: Bengali Hate Speech Dataset\n- Paper: Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network\n- Point of Contact: Md. Rezaul Karim### Dataset Summary\n\nThe Bengali Hate Speech Dataset is a Bengali-language dataset of news articles collected from various Bengali media sources and categorized based on the type of hate in the text. The dataset was created to provide greater support for under-resourced languages like Bengali on NLP tasks, and serves as a benchmark for multiple types of classification tasks.### Supported Tasks and Leaderboards\n\n* 'topic classification': The dataset can be used to train a Multichannel Convolutional-LSTM for classifying different types of hate speech. The model performance can be measured by its F1 score.### Languages\n\nThe text in the dataset is in Bengali and the associated BCP-47 code is 'bn'.## Dataset Structure", "passage: ### Data Instances\n\nA data instance takes the form of a news article and its associated label. \n\n Beware that the following example contains extremely offensive content! \n\nAn example looks like this:### Data Fields\n\n* 'text': the text of the Bengali news article\n* 'label': one of 'Geopolitical', 'Personal', 'Political', 'Religious', or 'Gender abusive' indicating the type of hate speech### Data Splits\n\nThe dataset has 3418 examples.## Dataset Creation### Curation Rationale\n\nUnder-resourced languages like Bengali lack supporting resources that languages like English have. This dataset was collected from multiple Bengali news sources to provide several classification benchmarks for hate speech detection, document classification and sentiment analysis.### Source Data#### Initial Data Collection and Normalization\n\nBengali articles were collected from a Bengali Wikipedia dump, Bengali news articles, news dumps of TV channels, books, blogs, sports portal and social media. Emphasis was placed on Facebook pages and newspaper sources because they have about 50 million followers and is a common source of opinion and hate speech. The full dataset consists of 250 million articles and is currently being prepared. This is a subset of the full dataset.#### Who are the source language producers?\n\nThe source language producers are Bengali authors and users who interact with these various forms of Bengali media.### Annotations#### Annotation process\n\nThe data was annotated by manually identifying freqently occurring terms in texts containing hate speech and references to specific entities. 
The authors also prepared normalized frequency vectors of 175 abusive terms that are commonly used to express hate in Bengali. A hate label is assigned if at least one of these terms exists in the text. Annotator's were provided with unbiased text only contents to make the decision. Non-hate statements were removed from the list and the category of hate was further divided into political, personal, gender abusive, geopolitical and religious. To reduce possible bias, each label was assigned based on a majority voting on the annotator's opinions and Cohen's Kappa was computed to measure inter-annotator agreement." ]
fd671e637acfbe911650fa398ec203f4205d128c
# Dataset Card for BnL Historical Newspapers ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://data.bnl.lu/data/historical-newspapers/ - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** opendata@bnl.etat.lu ### Dataset Summary The BnL has digitised over 800,000 pages of Luxembourg newspapers. This dataset currently has one configuration covering a subset of these newspapers, which sit under the "Processed Datasets" collection. The BnL: > processed all newspapers and monographs that are in the public domain and extracted the full text and associated meta data of every single article, section, advertisement… The result is a large number of small, easy to use XML files formatted using Dublin Core. [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure The dataset currently contains a single configuration. ### Data Instances An example instance from the dataset: ``` python {'id': 'https://persist.lu/ark:/70795/wx8r4c/articles/DTL47', 'article_type': 8, 'extent': 49, 'ispartof': 'Luxemburger Wort', 'pub_date': datetime.datetime(1853, 3, 23, 0, 0), 'publisher': 'Verl. der St-Paulus-Druckerei', 'source': 'newspaper/luxwort/1853-03-23', 'text': 'Asien. Eine neue Nedcrland-Post ist angekommen mil Nachrichten aus Calcutta bis zum 5. Febr.; Vom» vay, 12. Febr. ; Nangun und HongKong, 13. Jan. Die durch die letzte Post gebrachle Nachricht, der König von Ava sei durch seinen Bruder enlhronl worden, wird bestätigt. (K. Z.) Verantwortl. Herausgeber, F. Schümann.', 'title': 'Asien.', 'url': 'http://www.eluxemburgensia.lu/webclient/DeliveryManager?pid=209701#panel:pp|issue:209701|article:DTL47', 'language': 'de' } ``` ### Data Fields - 'id': This is a unique and persistent identifier using ARK. - 'article_type': The type of the exported data, possible values ('ADVERTISEMENT_SECTION', 'BIBLIOGRAPHY', 'CHAPTER', 'INDEX', 'CONTRIBUTION', 'TABLE_OF_CONTENTS', 'WEATHER', 'SHIPPING', 'SECTION', 'ARTICLE', 'TITLE_SECTION', 'DEATH_NOTICE', 'SUPPLEMENT', 'TABLE', 'ADVERTISEMENT', 'CHART_DIAGRAM', 'ILLUSTRATION', 'ISSUE') - 'extent': The number of words in the text field - 'ispartof': The complete title of the source document e.g. “Luxemburger Wort”. - 'pub_date': The publishing date of the document e.g. “1848-12-15” - 'publisher': The publisher of the document e.g. “Verl. der St-Paulus-Druckerei”. - 'source': Describes the source of the document. 
For example <dc:source>newspaper/luxwort/1848-12-15</dc:source> means that this article comes from the newspaper “luxwort” (ID for Luxemburger Wort) issued on 15.12.1848. - 'text': The full text of the entire article, section, advertisement, etc. It includes any titles and subtitles as well. The content does not contain layout information, such as headings, paragraphs or lines. - 'title': The main title of the article, section, advertisement, etc. - 'url': The link to the BnLViewer on eluxemburgensia.lu to view the resource online. - 'language': The language of the text, possible values ('ar', 'da', 'de', 'fi', 'fr', 'lb', 'nl', 'pt') ### Data Splits This dataset contains a single split `train`. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @misc{bnl_newspapers, title={Historical Newspapers}, url={https://data.bnl.lu/data/historical-newspapers/}, author={Bibliothèque nationale du Luxembourg} } ``` ### Contributions Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
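As a usage sketch for the card above, the `processed` configuration named in the metadata record below can be loaded and narrowed to a single language; this assumes the dataset id `bnl_newspapers` resolves on the Hugging Face Hub:

```python
from datasets import load_dataset

ds = load_dataset("bnl_newspapers", "processed", split="train")

# Keep only the Luxembourgish articles; any language code from the card works.
lb = ds.filter(lambda row: row["language"] == "lb")
print(len(lb), lb[0]["pub_date"], lb[0]["title"])
```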
bnl_newspapers
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ar", "language:da", "language:de", "language:fi", "language:fr", "language:lb", "language:nl", "language:pt", "license:cc0-1.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["ar", "da", "de", "fi", "fr", "lb", "nl", "pt"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "BnL Historical Newspapers", "dataset_info": {"config_name": "processed", "features": [{"name": "id", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "ispartof", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "pub_date", "dtype": "timestamp[s]"}, {"name": "publisher", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "article_type", "dtype": {"class_label": {"names": {"0": "ADVERTISEMENT_SECTION", "1": "BIBLIOGRAPHY", "2": "CHAPTER", "3": "INDEX", "4": "CONTRIBUTION", "5": "TABLE_OF_CONTENTS", "6": "WEATHER", "7": "SHIPPING", "8": "SECTION", "9": "ARTICLE", "10": "TITLE_SECTION", "11": "DEATH_NOTICE", "12": "SUPPLEMENT", "13": "TABLE", "14": "ADVERTISEMENT", "15": "CHART_DIAGRAM", "16": "ILLUSTRATION", "17": "ISSUE"}}}}, {"name": "extent", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 1611597178, "num_examples": 537558}], "download_size": 1033457256, "dataset_size": 1611597178}, "configs": [{"config_name": "processed", "data_files": [{"split": "train", "path": "processed/train-*"}], "default": true}]}
2024-01-24T16:24:00+00:00
[]
[ "ar", "da", "de", "fi", "fr", "lb", "nl", "pt" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-Arabic #language-Danish #language-German #language-Finnish #language-French #language-Luxembourgish #language-Dutch #language-Portuguese #license-cc0-1.0 #region-us
# Dataset Card for BnL Historical Newspapers ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: opendata@URL ### Dataset Summary The BnL has digitised over 800.000 pages of Luxembourg newspapers. This dataset currently has one configuration covering a subset of these newspapers, which sit under the "Processed Datasets" collection. The BNL: > processed all newspapers and monographs that are in the public domain and extracted the full text and associated meta data of every single article, section, advertisement… The result is a large number of small, easy to use XML files formatted using Dublin Core. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure The dataset currently contains a single configuration. ### Data Instances An example instance from the datasets: ### Data Fields - 'id': This is a unique and persistent identifier using ARK. - 'article_type': The type of the exported data, possible values ('ADVERTISEMENT_SECTION', 'BIBLIOGRAPHY', 'CHAPTER', 'INDEX', 'CONTRIBUTION', 'TABLE_OF_CONTENTS', 'WEATHER', 'SHIPPING', 'SECTION', 'ARTICLE', 'TITLE_SECTION', 'DEATH_NOTICE', 'SUPPLEMENT', 'TABLE', 'ADVERTISEMENT', 'CHART_DIAGRAM', 'ILLUSTRATION', 'ISSUE') - 'extent': The number of words in the text field - 'ispartof: The complete title of the source document e.g. “Luxemburger Wort”. - 'pub_date': The publishing date of the document e.g “1848-12-15” - 'publisher':The publisher of the document e.g. “Verl. der St-Paulus-Druckerei”. - 'source': Describes the source of the document. For example <dc:source>newspaper/luxwort/1848-12-15</dc:source> means that this article comes from the newspaper “luxwort” (ID for Luxemburger Wort) issued on 15.12.1848. - 'text': The full text of the entire article, section, advertisement etc. It includes any titles and subtitles as well. The content does not contain layout information, such as headings, paragraphs or lines. - 'title': The main title of the article, section, advertisement, etc. - 'url': The link to the BnLViewer on URL to view the resource online. - 'language': The language of the text, possible values ('ar', 'da', 'de', 'fi', 'fr', 'lb', 'nl', 'pt') ### Data Splits This dataset contains a single split 'train'. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @davanstrien for adding this dataset.
[ "# Dataset Card for BnL Historical Newspapers", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL \n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: opendata@URL", "### Dataset Summary\n\nThe BnL has digitised over 800.000 pages of Luxembourg newspapers. This dataset currently has one configuration covering a subset of these newspapers, which sit under the \"Processed Datasets\" collection. The BNL:\n\n> processed all newspapers and monographs that are in the public domain and extracted the full text and associated meta data of every single article, section, advertisement… The result is a large number of small, easy to use XML files formatted using Dublin Core.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure\n\nThe dataset currently contains a single configuration.", "### Data Instances\n\nAn example instance from the datasets:", "### Data Fields\n\n- 'id': This is a unique and persistent identifier using ARK. \n- 'article_type': The type of the exported data, possible values ('ADVERTISEMENT_SECTION',\n 'BIBLIOGRAPHY',\n 'CHAPTER',\n 'INDEX',\n 'CONTRIBUTION',\n 'TABLE_OF_CONTENTS',\n 'WEATHER',\n 'SHIPPING',\n 'SECTION',\n 'ARTICLE',\n 'TITLE_SECTION',\n 'DEATH_NOTICE',\n 'SUPPLEMENT',\n 'TABLE',\n 'ADVERTISEMENT',\n 'CHART_DIAGRAM',\n 'ILLUSTRATION',\n 'ISSUE')\n- 'extent': The number of words in the text field\n- 'ispartof: The complete title of the source document e.g. “Luxemburger Wort”.\n- 'pub_date': The publishing date of the document e.g “1848-12-15”\n- 'publisher':The publisher of the document e.g. “Verl. der St-Paulus-Druckerei”.\n- 'source': Describes the source of the document. For example\n<dc:source>newspaper/luxwort/1848-12-15</dc:source> means that this article comes from the newspaper “luxwort” (ID for Luxemburger Wort) issued on 15.12.1848.\n- 'text': The full text of the entire article, section, advertisement etc. It includes any titles and subtitles as well. The content does not contain layout information, such as headings, paragraphs or lines.\n- 'title': The main title of the article, section, advertisement, etc.\n- 'url': The link to the BnLViewer on URL to view the resource online.\n- 'language': The language of the text, possible values ('ar', 'da', 'de', 'fi', 'fr', 'lb', 'nl', 'pt')", "### Data Splits\n\nThis dataset contains a single split 'train'.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @davanstrien for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-Arabic #language-Danish #language-German #language-Finnish #language-French #language-Luxembourgish #language-Dutch #language-Portuguese #license-cc0-1.0 #region-us \n", "# Dataset Card for BnL Historical Newspapers", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL \n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: opendata@URL", "### Dataset Summary\n\nThe BnL has digitised over 800.000 pages of Luxembourg newspapers. This dataset currently has one configuration covering a subset of these newspapers, which sit under the \"Processed Datasets\" collection. The BNL:\n\n> processed all newspapers and monographs that are in the public domain and extracted the full text and associated meta data of every single article, section, advertisement… The result is a large number of small, easy to use XML files formatted using Dublin Core.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure\n\nThe dataset currently contains a single configuration.", "### Data Instances\n\nAn example instance from the datasets:", "### Data Fields\n\n- 'id': This is a unique and persistent identifier using ARK. \n- 'article_type': The type of the exported data, possible values ('ADVERTISEMENT_SECTION',\n 'BIBLIOGRAPHY',\n 'CHAPTER',\n 'INDEX',\n 'CONTRIBUTION',\n 'TABLE_OF_CONTENTS',\n 'WEATHER',\n 'SHIPPING',\n 'SECTION',\n 'ARTICLE',\n 'TITLE_SECTION',\n 'DEATH_NOTICE',\n 'SUPPLEMENT',\n 'TABLE',\n 'ADVERTISEMENT',\n 'CHART_DIAGRAM',\n 'ILLUSTRATION',\n 'ISSUE')\n- 'extent': The number of words in the text field\n- 'ispartof: The complete title of the source document e.g. “Luxemburger Wort”.\n- 'pub_date': The publishing date of the document e.g “1848-12-15”\n- 'publisher':The publisher of the document e.g. “Verl. der St-Paulus-Druckerei”.\n- 'source': Describes the source of the document. For example\n<dc:source>newspaper/luxwort/1848-12-15</dc:source> means that this article comes from the newspaper “luxwort” (ID for Luxemburger Wort) issued on 15.12.1848.\n- 'text': The full text of the entire article, section, advertisement etc. It includes any titles and subtitles as well. 
The content does not contain layout information, such as headings, paragraphs or lines.\n- 'title': The main title of the article, section, advertisement, etc.\n- 'url': The link to the BnLViewer on URL to view the resource online.\n- 'language': The language of the text, possible values ('ar', 'da', 'de', 'fi', 'fr', 'lb', 'nl', 'pt')", "### Data Splits\n\nThis dataset contains a single split 'train'.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @davanstrien for adding this dataset." ]
[ 154, 12, 125, 29, 113, 10, 4, 16, 15, 464, 18, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 18 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-Arabic #language-Danish #language-German #language-Finnish #language-French #language-Luxembourgish #language-Dutch #language-Portuguese #license-cc0-1.0 #region-us \n# Dataset Card for BnL Historical Newspapers## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL \n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact: opendata@URL### Dataset Summary\n\nThe BnL has digitised over 800.000 pages of Luxembourg newspapers. This dataset currently has one configuration covering a subset of these newspapers, which sit under the \"Processed Datasets\" collection. The BNL:\n\n> processed all newspapers and monographs that are in the public domain and extracted the full text and associated meta data of every single article, section, advertisement… The result is a large number of small, easy to use XML files formatted using Dublin Core.### Supported Tasks and Leaderboards### Languages## Dataset Structure\n\nThe dataset currently contains a single configuration.### Data Instances\n\nAn example instance from the datasets:" ]
61048017048803fa18f6777fd55a40f2e70ef1e3
# Dataset Card for BookCorpus

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://yknzhu.wixsite.com/mbweb](https://yknzhu.wixsite.com/mbweb)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.18 GB
- **Size of the generated dataset:** 4.85 GB
- **Total amount of disk used:** 6.03 GB

### Dataset Summary

Books are a rich source of both fine-grained information, what a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This work aims to align books to their movie releases in order to provide rich descriptive explanations for visual content that go semantically far beyond the captions available in current datasets.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### plain_text

- **Size of downloaded dataset files:** 1.18 GB
- **Size of the generated dataset:** 4.85 GB
- **Total amount of disk used:** 6.03 GB

An example of 'train' looks as follows.
```
{
    "text": "But I traded all my life for some lovin' and some gold"
}
```

### Data Fields

The data fields are the same among all splits.

#### plain_text

- `text`: a `string` feature.

### Data Splits

| name       |    train |
|------------|---------:|
| plain_text | 74004228 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

The books have been crawled from https://www.smashwords.com; see their [terms of service](https://www.smashwords.com/about/tos) for more information.

A data sheet for this dataset has also been created and published in [Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus](https://arxiv.org/abs/2105.05241).

### Citation Information

```
@InProceedings{Zhu_2015_ICCV,
    title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
    author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
    month = {December},
    year = {2015}
}
```

### Contributions

Thanks to [@lewtun](https://github.com/lewtun), [@richarddwang](https://github.com/richarddwang), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
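Given the single `text` field and the 74M-row `train` split documented above, streaming access is the natural way to peek at the corpus without materializing all 4.85 GB. A minimal sketch, assuming the dataset still resolves on the Hub under the `bookcorpus` id used in this card:

```python
from datasets import load_dataset

# Streaming avoids downloading and generating the full ~4.85 GB split.
books = load_dataset("bookcorpus", split="train", streaming=True)

for example in books.take(5):
    # Every example is a flat {"text": ...} record, as described above.
    print(example["text"])
```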
bookcorpus
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:unknown", "arxiv:2105.05241", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "bookcorpus", "pretty_name": "BookCorpus", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "config_name": "plain_text", "splits": [{"name": "train", "num_bytes": 4853859824, "num_examples": 74004228}], "download_size": 1179510242, "dataset_size": 4853859824}}
2024-01-18T11:02:03+00:00
[ "2105.05241" ]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-unknown #arxiv-2105.05241 #region-us
Dataset Card for BookCorpus =========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 1.18 GB * Size of the generated dataset: 4.85 GB * Total amount of disk used: 6.03 GB ### Dataset Summary Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story.This work aims to align books to their movie releases in order to providerich descriptive explanations for visual content that go semantically farbeyond the captions available in current datasets. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### plain\_text * Size of downloaded dataset files: 1.18 GB * Size of the generated dataset: 4.85 GB * Total amount of disk used: 6.03 GB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### plain\_text * 'text': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The books have been crawled from URL, see their terms of service for more information. A data sheet for this dataset has also been created and published in Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus. ### Contributions Thanks to @lewtun, @richarddwang, @lhoestq, @thomwolf for adding this dataset.
[ "### Dataset Summary\n\n\nBooks are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story.This work aims to align books to their movie releases in order to providerich descriptive explanations for visual content that go semantically farbeyond the captions available in current datasets.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 1.18 GB\n* Size of the generated dataset: 4.85 GB\n* Total amount of disk used: 6.03 GB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'text': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe books have been crawled from URL, see their terms of service for more information.\n\n\nA data sheet for this dataset has also been created and published in Addressing \"Documentation Debt\" in Machine Learning Research: A Retrospective Datasheet for BookCorpus.", "### Contributions\n\n\nThanks to @lewtun, @richarddwang, @lhoestq, @thomwolf for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-unknown #arxiv-2105.05241 #region-us \n", "### Dataset Summary\n\n\nBooks are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story.This work aims to align books to their movie releases in order to providerich descriptive explanations for visual content that go semantically farbeyond the captions available in current datasets.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 1.18 GB\n* Size of the generated dataset: 4.85 GB\n* Total amount of disk used: 6.03 GB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'text': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe books have been crawled from URL, see their terms of service for more information.\n\n\nA data sheet for this dataset has also been created and published in Addressing \"Documentation Debt\" in Machine Learning Research: A Retrospective Datasheet for BookCorpus.", "### Contributions\n\n\nThanks to @lewtun, @richarddwang, @lhoestq, @thomwolf for adding this dataset." ]
[ 120, 100, 10, 11, 6, 52, 17, 17, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 63, 32 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-unknown #arxiv-2105.05241 #region-us \n### Dataset Summary\n\n\nBooks are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story.This work aims to align books to their movie releases in order to providerich descriptive explanations for visual content that go semantically farbeyond the captions available in current datasets.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### plain\\_text\n\n\n* Size of downloaded dataset files: 1.18 GB\n* Size of the generated dataset: 4.85 GB\n* Total amount of disk used: 6.03 GB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### plain\\_text\n\n\n* 'text': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators" ]
817f291474dcb4fa865ed7c8298e709cd8a20266
# Dataset Card for BookCorpusOpen

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://github.com/soskek/bookcorpus/issues/27](https://github.com/soskek/bookcorpus/issues/27)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.40 GB
- **Size of the generated dataset:** 6.64 GB
- **Total amount of disk used:** 9.05 GB

### Dataset Summary

<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
  <p><b>Defunct:</b> Dataset "bookcorpusopen" is defunct and no longer accessible due to unavailability of the source data.</p>
</div>

Books are a rich source of both fine-grained information, what a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story.

This version of bookcorpus has 17868 dataset items (books). Each item contains two fields: title and text. The title is the name of the book (just the file name) while text contains unprocessed book text. The bookcorpus has been prepared by Shawn Presser and is generously hosted by The-Eye. The-Eye is a non-profit, community-driven platform dedicated to the archiving and long-term preservation of any and all data including but by no means limited to... websites, books, games, software, video, audio, other digital-obscura and ideas.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### plain_text

- **Size of downloaded dataset files:** 2.40 GB
- **Size of the generated dataset:** 6.64 GB
- **Total amount of disk used:** 9.05 GB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "text": "\"\\n\\nzONE\\n\\n## The end and the beginning\\n\\nby\\n\\nPhilip F. Blood\\n\\nSMASHWORDS EDITION\\n\\nVersion 3.55\\n\\nPUBLISHED BY:\\n\\nPhi...",
    "title": "zone-the-end-and-the-beginning.epub.txt"
}
```

### Data Fields

The data fields are the same among all splits.

#### plain_text

- `title`: a `string` feature.
- `text`: a `string` feature.

### Data Splits

| name       | train |
|------------|------:|
| plain_text | 17868 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

The books have been crawled from smashwords.com; see their [terms of service](https://www.smashwords.com/about/tos) for more information.

A data sheet for this dataset has also been created and published in [Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus](https://arxiv.org/abs/2105.05241).

### Citation Information

```
@InProceedings{Zhu_2015_ICCV,
    title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
    author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
    month = {December},
    year = {2015}
}
```

### Contributions

Thanks to [@vblagoje](https://github.com/vblagoje) for adding this dataset.
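Because the source data is defunct and no longer downloadable, the card's schema is the main thing a reader can still act on. The sketch below builds a tiny in-memory stand-in with the same two-field `plain_text` layout (`title` and `text` strings), purely to illustrate the documented structure; the sample rows are invented placeholders, not real corpus data.

```python
from datasets import Dataset, Features, Value

# The `plain_text` config documented above: two string fields per book.
features = Features({"title": Value("string"), "text": Value("string")})

# Hypothetical placeholder row standing in for the defunct source data.
stand_in = Dataset.from_dict(
    {
        "title": ["example-book.epub.txt"],
        "text": ["Full, unprocessed book text would appear here."],
    },
    features=features,
)

print(stand_in)               # Dataset with features ['title', 'text'], 1 row
print(stand_in[0]["title"])   # "example-book.epub.txt"
```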
bookcorpusopen
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "arxiv:2105.05241", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "bookcorpus", "pretty_name": "BookCorpusOpen", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "config_name": "plain_text", "splits": [{"name": "train", "num_bytes": 6643435392, "num_examples": 17868}], "download_size": 2404269430, "dataset_size": 6643435392}, "viewer": false}
2023-11-24T14:42:08+00:00
[ "2105.05241" ]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #arxiv-2105.05241 #region-us
Dataset Card for BookCorpusOpen =============================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 2.40 GB * Size of the generated dataset: 6.64 GB * Total amount of disk used: 9.05 GB ### Dataset Summary **Defunct:** Dataset "bookcorpusopen" is defunct and no longer accessible due to unavailability of the source data. Books are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story. This version of bookcorpus has 17868 dataset items (books). Each item contains two fields: title and text. The title is the name of the book (just the file name) while text contains unprocessed book text. The bookcorpus has been prepared by Shawn Presser and is generously hosted by The-Eye. The-Eye is a non-profit, community driven platform dedicated to the archiving and long-term preservation of any and all data including but by no means limited to... websites, books, games, software, video, audio, other digital-obscura and ideas. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### plain\_text * Size of downloaded dataset files: 2.40 GB * Size of the generated dataset: 6.64 GB * Total amount of disk used: 9.05 GB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### plain\_text * 'title': a 'string' feature. * 'text': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The books have been crawled from URL, see their terms of service for more information. A data sheet for this dataset has also been created and published in Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus ### Contributions Thanks to @vblagoje for adding this dataset.
[ "### Dataset Summary\n\n\n\n**Defunct:** Dataset \"bookcorpusopen\" is defunct and no longer accessible due to unavailability of the source data.\n\n\n\nBooks are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story.\nThis version of bookcorpus has 17868 dataset items (books). Each item contains two fields: title and text. The title is the name of the book (just the file name) while text contains unprocessed book text. The bookcorpus has been prepared by Shawn Presser and is generously hosted by The-Eye. The-Eye is a non-profit, community driven platform dedicated to the archiving and long-term preservation of any and all data including but by no means limited to... websites, books, games, software, video, audio, other digital-obscura and ideas.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 2.40 GB\n* Size of the generated dataset: 6.64 GB\n* Total amount of disk used: 9.05 GB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'title': a 'string' feature.\n* 'text': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe books have been crawled from URL, see their terms of service for more information.\n\n\nA data sheet for this dataset has also been created and published in Addressing \"Documentation Debt\" in Machine Learning Research: A Retrospective Datasheet for BookCorpus", "### Contributions\n\n\nThanks to @vblagoje for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #arxiv-2105.05241 #region-us \n", "### Dataset Summary\n\n\n\n**Defunct:** Dataset \"bookcorpusopen\" is defunct and no longer accessible due to unavailability of the source data.\n\n\n\nBooks are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story.\nThis version of bookcorpus has 17868 dataset items (books). Each item contains two fields: title and text. The title is the name of the book (just the file name) while text contains unprocessed book text. The bookcorpus has been prepared by Shawn Presser and is generously hosted by The-Eye. The-Eye is a non-profit, community driven platform dedicated to the archiving and long-term preservation of any and all data including but by no means limited to... websites, books, games, software, video, audio, other digital-obscura and ideas.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### plain\\_text\n\n\n* Size of downloaded dataset files: 2.40 GB\n* Size of the generated dataset: 6.64 GB\n* Total amount of disk used: 9.05 GB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### plain\\_text\n\n\n* 'title': a 'string' feature.\n* 'text': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe books have been crawled from URL, see their terms of service for more information.\n\n\nA data sheet for this dataset has also been created and published in Addressing \"Documentation Debt\" in Machine Learning Research: A Retrospective Datasheet for BookCorpus", "### Contributions\n\n\nThanks to @vblagoje for adding this dataset." ]
[ 120, 222, 10, 11, 6, 52, 17, 28, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 62, 18 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #arxiv-2105.05241 #region-us \n### Dataset Summary\n\n\n\n**Defunct:** Dataset \"bookcorpusopen\" is defunct and no longer accessible due to unavailability of the source data.\n\n\n\nBooks are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story.\nThis version of bookcorpus has 17868 dataset items (books). Each item contains two fields: title and text. The title is the name of the book (just the file name) while text contains unprocessed book text. The bookcorpus has been prepared by Shawn Presser and is generously hosted by The-Eye. The-Eye is a non-profit, community driven platform dedicated to the archiving and long-term preservation of any and all data including but by no means limited to... websites, books, games, software, video, audio, other digital-obscura and ideas.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### plain\\_text\n\n\n* Size of downloaded dataset files: 2.40 GB\n* Size of the generated dataset: 6.64 GB\n* Total amount of disk used: 9.05 GB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### plain\\_text\n\n\n* 'title': a 'string' feature.\n* 'text': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization" ]
35b264d03638db9f4ce671b711558bf7ff0f80d5
# Dataset Card for BoolQ

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Repository:** https://github.com/google-research-datasets/boolean-questions
- **Paper:** https://arxiv.org/abs/1905.10044
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 8.77 MB
- **Size of the generated dataset:** 7.83 MB
- **Total amount of disk used:** 16.59 MB

### Dataset Summary

BoolQ is a question answering dataset for yes/no questions containing 15942 examples. These questions are naturally occurring: they are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### default

- **Size of downloaded dataset files:** 8.77 MB
- **Size of the generated dataset:** 7.83 MB
- **Total amount of disk used:** 16.59 MB

An example of 'validation' looks as follows.
```
This example was too long and was cropped:

{
    "answer": false,
    "passage": "\"All biomass goes through at least some of these steps: it needs to be grown, collected, dried, fermented, distilled, and burned...",
    "question": "does ethanol take more energy make that produces"
}
```

### Data Fields

The data fields are the same among all splits.

#### default

- `question`: a `string` feature.
- `answer`: a `bool` feature.
- `passage`: a `string` feature.

### Data Splits

| name    | train | validation |
|---------|------:|-----------:|
| default |  9427 |       3270 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

BoolQ is released under the [Creative Commons Share-Alike 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.

### Citation Information

```
@inproceedings{clark2019boolq,
  title = {BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
  author = {Clark, Christopher and Lee, Kenton and Chang, Ming-Wei and Kwiatkowski, Tom and Collins, Michael and Toutanova, Kristina},
  booktitle = {NAACL},
  year = {2019},
}
```

### Contributions

Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
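The boolean `answer` field makes a majority-class baseline a natural first check against the splits above. A minimal sketch, assuming the dataset loads from the Hub under the `google/boolq` id attached to this card:

```python
from datasets import load_dataset

boolq = load_dataset("google/boolq")
train, val = boolq["train"], boolq["validation"]

# `answer` is a bool feature, so the majority class is just the more
# frequent truth value in the 9427-example training split.
majority = sum(train["answer"]) >= len(train) / 2

# Score that constant prediction on the 3270-example validation split.
accuracy = sum(a == majority for a in val["answer"]) / len(val)
print(f"majority class: {majority}, validation accuracy: {accuracy:.3f}")
```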
google/boolq
[ "task_categories:text-classification", "task_ids:natural-language-inference", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-3.0", "arxiv:1905.10044", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference"], "paperswithcode_id": "boolq", "pretty_name": "BoolQ", "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "bool"}, {"name": "passage", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5829584, "num_examples": 9427}, {"name": "validation", "num_bytes": 1998182, "num_examples": 3270}], "download_size": 4942776, "dataset_size": 7827766}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-22T09:16:26+00:00
[ "1905.10044" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-3.0 #arxiv-1905.10044 #region-us
Dataset Card for Boolq ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: URL * Paper: URL * Point of Contact: * Size of downloaded dataset files: 8.77 MB * Size of the generated dataset: 7.83 MB * Total amount of disk used: 16.59 MB ### Dataset Summary BoolQ is a question answering dataset for yes/no questions containing 15942 examples. These questions are naturally occurring ---they are generated in unprompted and unconstrained settings. Each example is a triplet of (question, passage, answer), with the title of the page as optional additional context. The text-pair classification setup is similar to existing natural language inference tasks. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 8.77 MB * Size of the generated dataset: 7.83 MB * Total amount of disk used: 16.59 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'question': a 'string' feature. * 'answer': a 'bool' feature. * 'passage': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information BoolQ is released under the Creative Commons Share-Alike 3.0 license. ### Contributions Thanks to @lewtun, @lhoestq, @thomwolf, @patrickvonplaten, @albertvillanova for adding this dataset.
[ "### Dataset Summary\n\n\nBoolQ is a question answering dataset for yes/no questions containing 15942 examples. These questions are naturally\noccurring ---they are generated in unprompted and unconstrained settings.\nEach example is a triplet of (question, passage, answer), with the title of the page as optional additional context.\nThe text-pair classification setup is similar to existing natural language inference tasks.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 8.77 MB\n* Size of the generated dataset: 7.83 MB\n* Total amount of disk used: 16.59 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'question': a 'string' feature.\n* 'answer': a 'bool' feature.\n* 'passage': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nBoolQ is released under the Creative Commons Share-Alike 3.0 license.", "### Contributions\n\n\nThanks to @lewtun, @lhoestq, @thomwolf, @patrickvonplaten, @albertvillanova for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-3.0 #arxiv-1905.10044 #region-us \n", "### Dataset Summary\n\n\nBoolQ is a question answering dataset for yes/no questions containing 15942 examples. These questions are naturally\noccurring ---they are generated in unprompted and unconstrained settings.\nEach example is a triplet of (question, passage, answer), with the title of the page as optional additional context.\nThe text-pair classification setup is similar to existing natural language inference tasks.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 8.77 MB\n* Size of the generated dataset: 7.83 MB\n* Total amount of disk used: 16.59 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'question': a 'string' feature.\n* 'answer': a 'bool' feature.\n* 'passage': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nBoolQ is released under the Creative Commons Share-Alike 3.0 license.", "### Contributions\n\n\nThanks to @lewtun, @lhoestq, @thomwolf, @patrickvonplaten, @albertvillanova for adding this dataset." ]
[ 103, 99, 10, 11, 6, 50, 17, 39, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 21, 39 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-sa-3.0 #arxiv-1905.10044 #region-us \n### Dataset Summary\n\n\nBoolQ is a question answering dataset for yes/no questions containing 15942 examples. These questions are naturally\noccurring ---they are generated in unprompted and unconstrained settings.\nEach example is a triplet of (question, passage, answer), with the title of the page as optional additional context.\nThe text-pair classification setup is similar to existing natural language inference tasks.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 8.77 MB\n* Size of the generated dataset: 7.83 MB\n* Total amount of disk used: 16.59 MB\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'question': a 'string' feature.\n* 'answer': a 'bool' feature.\n* 'passage': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information\n\n\nBoolQ is released under the Creative Commons Share-Alike 3.0 license." ]
45f1ac8242a87d96645e04bd6c1c645c85bf61ed
# Dataset Card for bprec

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [bprec homepage](https://clarin-pl.eu/dspace/handle/11321/736)
- **Repository:** [bprec repository](https://gitlab.clarin-pl.eu/team-semantics/semrel-extraction)
- **Paper:** [bprec paper](https://www.aclweb.org/anthology/2020.lrec-1.233.pdf)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

Brand-Product Relation Extraction Corpora in Polish.

### Supported Tasks and Leaderboards

NER, entity linking.

### Languages

Polish

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

- id: int identifier of a text
- text: string text, for example a consumer comment on social media
- ner: extracted entities and their relationship
  - source and target: a pair of entities identified in the text
    - from: int value representing the starting character of the entity
    - text: string value with the entity text
    - to: int value representing the end character of the entity
    - type: one of the pre-identified entity types:
      - PRODUCT_NAME
      - PRODUCT_NAME_IMP
      - PRODUCT_NO_BRAND
      - BRAND_NAME
      - BRAND_NAME_IMP
      - VERSION
      - PRODUCT_ADJ
      - BRAND_ADJ
      - LOCATION
      - LOCATION_IMP

### Data Splits

No train/validation/test split is provided. The current dataset configurations point to 4 domain categories for the texts:

- tele
- electro
- cosmetics
- banking

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{inproceedings,
  author = {Janz, Arkadiusz and Kopociński, Łukasz and Piasecki, Maciej and Pluwak, Agnieszka},
  year = {2020},
  month = {05},
  pages = {},
  title = {Brand-Product Relation Extraction Using Heterogeneous Vector Space Representations}
}
```

### Contributions

Thanks to [@kldarek](https://github.com/kldarek) for adding this dataset.
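The character offsets in the `ner` field above make it possible to recover each entity's surface form directly from `text`. The sketch below illustrates that; the configuration name `tele` comes from the Data Splits list, while the exact nesting of loaded records and the assumption that offsets are 0-based and end-exclusive are inferred from the metadata record further below, so treat those details as assumptions rather than the library's documented behavior.

```
from datasets import load_dataset

# "tele" is one of the four domain configurations listed under Data Splits;
# per the metadata record below, each named configuration has a "train" split.
ds = load_dataset("bprec", "tele")["train"]

record = ds[0]
text = record["text"]

# Assumed layout: record["ner"] holds parallel lists of source/target entity
# structs, each with character offsets into `text` (0-based, end-exclusive)
# and a class-label index for `type`.
for src, tgt in zip(record["ner"]["source"], record["ner"]["target"]):
    src_surface = text[src["from"]:src["to"]]
    tgt_surface = text[tgt["from"]:tgt["to"]]
    print(src["type"], repr(src_surface), "->", tgt["type"], repr(tgt_surface))
```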
bprec
[ "task_categories:text-retrieval", "task_ids:entity-linking-retrieval", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:pl", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["pl"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "task_ids": ["entity-linking-retrieval"], "pretty_name": "bprec", "dataset_info": [{"config_name": "default", "features": [{"name": "id", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "ner", "sequence": [{"name": "source", "struct": [{"name": "from", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "to", "dtype": "int32"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "PRODUCT_NAME", "1": "PRODUCT_NAME_IMP", "2": "PRODUCT_NO_BRAND", "3": "BRAND_NAME", "4": "BRAND_NAME_IMP", "5": "VERSION", "6": "PRODUCT_ADJ", "7": "BRAND_ADJ", "8": "LOCATION", "9": "LOCATION_IMP"}}}}]}, {"name": "target", "struct": [{"name": "from", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "to", "dtype": "int32"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "PRODUCT_NAME", "1": "PRODUCT_NAME_IMP", "2": "PRODUCT_NO_BRAND", "3": "BRAND_NAME", "4": "BRAND_NAME_IMP", "5": "VERSION", "6": "PRODUCT_ADJ", "7": "BRAND_ADJ", "8": "LOCATION", "9": "LOCATION_IMP"}}}}]}]}], "splits": [{"name": "tele", "num_bytes": 2739015, "num_examples": 2391}, {"name": "electro", "num_bytes": 125999, "num_examples": 382}, {"name": "cosmetics", "num_bytes": 1565263, "num_examples": 2384}, {"name": "banking", "num_bytes": 446944, "num_examples": 561}], "download_size": 8006167, "dataset_size": 4877221}, {"config_name": "all", "features": [{"name": "id", "dtype": "int32"}, {"name": "category", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "ner", "sequence": [{"name": "source", "struct": [{"name": "from", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "to", "dtype": "int32"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "PRODUCT_NAME", "1": "PRODUCT_NAME_IMP", "2": "PRODUCT_NO_BRAND", "3": "BRAND_NAME", "4": "BRAND_NAME_IMP", "5": "VERSION", "6": "PRODUCT_ADJ", "7": "BRAND_ADJ", "8": "LOCATION", "9": "LOCATION_IMP"}}}}]}, {"name": "target", "struct": [{"name": "from", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "to", "dtype": "int32"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "PRODUCT_NAME", "1": "PRODUCT_NAME_IMP", "2": "PRODUCT_NO_BRAND", "3": "BRAND_NAME", "4": "BRAND_NAME_IMP", "5": "VERSION", "6": "PRODUCT_ADJ", "7": "BRAND_ADJ", "8": "LOCATION", "9": "LOCATION_IMP"}}}}]}]}], "splits": [{"name": "train", "num_bytes": 4937658, "num_examples": 5718}], "download_size": 8006167, "dataset_size": 4937658}, {"config_name": "tele", "features": [{"name": "id", "dtype": "int32"}, {"name": "category", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "ner", "sequence": [{"name": "source", "struct": [{"name": "from", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "to", "dtype": "int32"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "PRODUCT_NAME", "1": "PRODUCT_NAME_IMP", "2": "PRODUCT_NO_BRAND", "3": "BRAND_NAME", "4": "BRAND_NAME_IMP", "5": "VERSION", "6": "PRODUCT_ADJ", "7": "BRAND_ADJ", "8": "LOCATION", "9": "LOCATION_IMP"}}}}]}, {"name": "target", "struct": [{"name": "from", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "to", "dtype": "int32"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "PRODUCT_NAME", "1": "PRODUCT_NAME_IMP", "2": 
"PRODUCT_NO_BRAND", "3": "BRAND_NAME", "4": "BRAND_NAME_IMP", "5": "VERSION", "6": "PRODUCT_ADJ", "7": "BRAND_ADJ", "8": "LOCATION", "9": "LOCATION_IMP"}}}}]}]}], "splits": [{"name": "train", "num_bytes": 2758147, "num_examples": 2391}], "download_size": 4569708, "dataset_size": 2758147}, {"config_name": "electro", "features": [{"name": "id", "dtype": "int32"}, {"name": "category", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "ner", "sequence": [{"name": "source", "struct": [{"name": "from", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "to", "dtype": "int32"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "PRODUCT_NAME", "1": "PRODUCT_NAME_IMP", "2": "PRODUCT_NO_BRAND", "3": "BRAND_NAME", "4": "BRAND_NAME_IMP", "5": "VERSION", "6": "PRODUCT_ADJ", "7": "BRAND_ADJ", "8": "LOCATION", "9": "LOCATION_IMP"}}}}]}, {"name": "target", "struct": [{"name": "from", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "to", "dtype": "int32"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "PRODUCT_NAME", "1": "PRODUCT_NAME_IMP", "2": "PRODUCT_NO_BRAND", "3": "BRAND_NAME", "4": "BRAND_NAME_IMP", "5": "VERSION", "6": "PRODUCT_ADJ", "7": "BRAND_ADJ", "8": "LOCATION", "9": "LOCATION_IMP"}}}}]}]}], "splits": [{"name": "train", "num_bytes": 130205, "num_examples": 382}], "download_size": 269917, "dataset_size": 130205}, {"config_name": "cosmetics", "features": [{"name": "id", "dtype": "int32"}, {"name": "category", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "ner", "sequence": [{"name": "source", "struct": [{"name": "from", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "to", "dtype": "int32"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "PRODUCT_NAME", "1": "PRODUCT_NAME_IMP", "2": "PRODUCT_NO_BRAND", "3": "BRAND_NAME", "4": "BRAND_NAME_IMP", "5": "VERSION", "6": "PRODUCT_ADJ", "7": "BRAND_ADJ", "8": "LOCATION", "9": "LOCATION_IMP"}}}}]}, {"name": "target", "struct": [{"name": "from", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "to", "dtype": "int32"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "PRODUCT_NAME", "1": "PRODUCT_NAME_IMP", "2": "PRODUCT_NO_BRAND", "3": "BRAND_NAME", "4": "BRAND_NAME_IMP", "5": "VERSION", "6": "PRODUCT_ADJ", "7": "BRAND_ADJ", "8": "LOCATION", "9": "LOCATION_IMP"}}}}]}]}], "splits": [{"name": "train", "num_bytes": 1596259, "num_examples": 2384}], "download_size": 2417388, "dataset_size": 1596259}, {"config_name": "banking", "features": [{"name": "id", "dtype": "int32"}, {"name": "category", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "ner", "sequence": [{"name": "source", "struct": [{"name": "from", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "to", "dtype": "int32"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "PRODUCT_NAME", "1": "PRODUCT_NAME_IMP", "2": "PRODUCT_NO_BRAND", "3": "BRAND_NAME", "4": "BRAND_NAME_IMP", "5": "VERSION", "6": "PRODUCT_ADJ", "7": "BRAND_ADJ", "8": "LOCATION", "9": "LOCATION_IMP"}}}}]}, {"name": "target", "struct": [{"name": "from", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "to", "dtype": "int32"}, {"name": "type", "dtype": {"class_label": {"names": {"0": "PRODUCT_NAME", "1": "PRODUCT_NAME_IMP", "2": "PRODUCT_NO_BRAND", "3": "BRAND_NAME", "4": "BRAND_NAME_IMP", "5": "VERSION", "6": "PRODUCT_ADJ", "7": "BRAND_ADJ", "8": "LOCATION", "9": "LOCATION_IMP"}}}}]}]}], "splits": [{"name": "train", "num_bytes": 453119, 
"num_examples": 561}], "download_size": 749154, "dataset_size": 453119}]}
2024-01-18T11:02:04+00:00
[]
[ "pl" ]
42d29b59a18aec2be0986d24469bf67b6291cb27
# Dataset Card for "break_data" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/allenai/Break](https://github.com/allenai/Break) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 79.86 MB - **Size of the generated dataset:** 155.55 MB - **Total amount of disk used:** 235.39 MB ### Dataset Summary Break is a human annotated dataset of natural language questions and their Question Decomposition Meaning Representations (QDMRs). Break consists of 83,978 examples sampled from 10 question answering datasets over text, images and databases. This repository contains the Break dataset along with information on the exact data format. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### QDMR - **Size of downloaded dataset files:** 15.97 MB - **Size of the generated dataset:** 15.93 MB - **Total amount of disk used:** 31.90 MB An example of 'validation' looks as follows. ``` { "decomposition": "return flights ;return #1 from denver ;return #2 to philadelphia ;return #3 if available", "operators": "['select', 'filter', 'filter', 'filter']", "question_id": "ATIS_dev_0", "question_text": "what flights are available tomorrow from denver to philadelphia ", "split": "dev" } ``` #### QDMR-high-level - **Size of downloaded dataset files:** 15.97 MB - **Size of the generated dataset:** 6.54 MB - **Total amount of disk used:** 22.51 MB An example of 'train' looks as follows. 
```
{
    "decomposition": "return ground transportation ;return #1 which is available ;return #2 from the pittsburgh airport ;return #3 to downtown ;return the cost of #4",
    "operators": "['select', 'filter', 'filter', 'filter', 'project']",
    "question_id": "ATIS_dev_102",
    "question_text": "what ground transportation is available from the pittsburgh airport to downtown and how much does it cost ",
    "split": "dev"
}
```

#### QDMR-high-level-lexicon

- **Size of downloaded dataset files:** 15.97 MB
- **Size of the generated dataset:** 31.64 MB
- **Total amount of disk used:** 47.61 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
    "allowed_tokens": "\"['higher than', 'same as', 'what ', 'and ', 'than ', 'at most', 'he', 'distinct', 'House', 'two', 'at least', 'or ', 'date', 'o...",
    "source": "What office, also held by a member of the Maine House of Representatives, did James K. Polk hold before he was president?"
}
```

#### QDMR-lexicon

- **Size of downloaded dataset files:** 15.97 MB
- **Size of the generated dataset:** 77.19 MB
- **Total amount of disk used:** 93.16 MB

An example of 'validation' looks as follows.
```
This example was too long and was cropped:

{
    "allowed_tokens": "\"['higher than', 'same as', 'what ', 'and ', 'than ', 'at most', 'distinct', 'two', 'at least', 'or ', 'date', 'on ', '@@14@@', ...",
    "source": "what flights are available tomorrow from denver to philadelphia "
}
```

#### logical-forms

- **Size of downloaded dataset files:** 15.97 MB
- **Size of the generated dataset:** 24.25 MB
- **Total amount of disk used:** 40.22 MB

An example of 'train' looks as follows.
```
{
    "decomposition": "return ground transportation ;return #1 which is available ;return #2 from the pittsburgh airport ;return #3 to downtown ;return the cost of #4",
    "operators": "['select', 'filter', 'filter', 'filter', 'project']",
    "program": "some program",
    "question_id": "ATIS_dev_102",
    "question_text": "what ground transportation is available from the pittsburgh airport to downtown and how much does it cost ",
    "split": "dev"
}
```

### Data Fields

The data fields are the same among all splits.

#### QDMR

- `question_id`: a `string` feature.
- `question_text`: a `string` feature.
- `decomposition`: a `string` feature.
- `operators`: a `string` feature.
- `split`: a `string` feature.

#### QDMR-high-level

- `question_id`: a `string` feature.
- `question_text`: a `string` feature.
- `decomposition`: a `string` feature.
- `operators`: a `string` feature.
- `split`: a `string` feature.

#### QDMR-high-level-lexicon

- `source`: a `string` feature.
- `allowed_tokens`: a `string` feature.

#### QDMR-lexicon

- `source`: a `string` feature.
- `allowed_tokens`: a `string` feature.

#### logical-forms

- `question_id`: a `string` feature.
- `question_text`: a `string` feature.
- `decomposition`: a `string` feature.
- `operators`: a `string` feature.
- `split`: a `string` feature.
- `program`: a `string` feature.
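As the instances above show, the `decomposition` field packs the QDMR steps into a single string: steps are separated by `;`, each step begins with the keyword `return`, and `#k` refers back to the k-th step. A minimal sketch of loading a configuration and splitting a decomposition into steps follows; the parsing convention is inferred from the examples above rather than from an official spec, so treat it as an approximation.

```
from datasets import load_dataset

def qdmr_steps(decomposition):
    """Split a Break decomposition string into its ordered QDMR steps.

    Steps are ';'-separated and each begins with 'return'; '#k' tokens
    refer back to the k-th step (1-indexed).
    """
    return [step.strip().removeprefix("return").strip()
            for step in decomposition.split(";")]

# "QDMR" is one of the configuration names listed under Data Splits.
qdmr = load_dataset("break_data", "QDMR")
example = qdmr["validation"][0]
print(qdmr_steps(example["decomposition"]))
# For the ATIS example shown above this yields:
# ['flights', '#1 from denver', '#2 to philadelphia', '#3 if available']
```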
### Data Splits

| name                    | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| QDMR                    | 44321 |       7760 | 8069 |
| QDMR-high-level         | 17503 |       3130 | 3195 |
| QDMR-high-level-lexicon | 17503 |       3130 | 3195 |
| QDMR-lexicon            | 44321 |       7760 | 8069 |
| logical-forms           | 44098 |       7719 | 8006 |

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@article{Wolfson2020Break,
  title={Break It Down: A Question Understanding Benchmark},
  author={Wolfson, Tomer and Geva, Mor and Gupta, Ankit and Gardner, Matt and Goldberg, Yoav and Deutch, Daniel and Berant, Jonathan},
  journal={Transactions of the Association for Computational Linguistics},
  year={2020},
}
```

### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
break_data
[ "task_categories:text2text-generation", "task_ids:open-domain-abstractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "task_ids": ["open-domain-abstractive-qa"], "paperswithcode_id": "break", "pretty_name": "BREAK", "dataset_info": [{"config_name": "QDMR", "features": [{"name": "question_id", "dtype": "string"}, {"name": "question_text", "dtype": "string"}, {"name": "decomposition", "dtype": "string"}, {"name": "operators", "dtype": "string"}, {"name": "split", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12757200, "num_examples": 44321}, {"name": "validation", "num_bytes": 2231632, "num_examples": 7760}, {"name": "test", "num_bytes": 894558, "num_examples": 8069}], "download_size": 5175508, "dataset_size": 15883390}, {"config_name": "QDMR-high-level", "features": [{"name": "question_id", "dtype": "string"}, {"name": "question_text", "dtype": "string"}, {"name": "decomposition", "dtype": "string"}, {"name": "operators", "dtype": "string"}, {"name": "split", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5134938, "num_examples": 17503}, {"name": "validation", "num_bytes": 912408, "num_examples": 3130}, {"name": "test", "num_bytes": 479919, "num_examples": 3195}], "download_size": 3113187, "dataset_size": 6527265}, {"config_name": "QDMR-high-level-lexicon", "features": [{"name": "source", "dtype": "string"}, {"name": "allowed_tokens", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23227946, "num_examples": 17503}, {"name": "validation", "num_bytes": 4157495, "num_examples": 3130}, {"name": "test", "num_bytes": 4239547, "num_examples": 3195}], "download_size": 5663924, "dataset_size": 31624988}, {"config_name": "QDMR-lexicon", "features": [{"name": "source", "dtype": "string"}, {"name": "allowed_tokens", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 56896433, "num_examples": 44321}, {"name": "validation", "num_bytes": 9934015, "num_examples": 7760}, {"name": "test", "num_bytes": 10328787, "num_examples": 8069}], "download_size": 10818266, "dataset_size": 77159235}, {"config_name": "logical-forms", "features": [{"name": "question_id", "dtype": "string"}, {"name": "question_text", "dtype": "string"}, {"name": "decomposition", "dtype": "string"}, {"name": "operators", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "program", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19783061, "num_examples": 44098}, {"name": "validation", "num_bytes": 3498114, "num_examples": 7719}, {"name": "test", "num_bytes": 920007, "num_examples": 8006}], "download_size": 7572815, "dataset_size": 24201182}], "configs": [{"config_name": "QDMR", "data_files": [{"split": "train", "path": "QDMR/train-*"}, {"split": "validation", "path": "QDMR/validation-*"}, {"split": "test", "path": "QDMR/test-*"}]}, {"config_name": "QDMR-high-level", "data_files": [{"split": "train", "path": "QDMR-high-level/train-*"}, {"split": "validation", "path": "QDMR-high-level/validation-*"}, {"split": "test", "path": "QDMR-high-level/test-*"}]}, {"config_name": "QDMR-high-level-lexicon", "data_files": [{"split": "train", "path": "QDMR-high-level-lexicon/train-*"}, {"split": "validation", "path": "QDMR-high-level-lexicon/validation-*"}, {"split": "test", "path": "QDMR-high-level-lexicon/test-*"}]}, {"config_name": "QDMR-lexicon", "data_files": [{"split": "train", "path": 
"QDMR-lexicon/train-*"}, {"split": "validation", "path": "QDMR-lexicon/validation-*"}, {"split": "test", "path": "QDMR-lexicon/test-*"}]}, {"config_name": "logical-forms", "data_files": [{"split": "train", "path": "logical-forms/train-*"}, {"split": "validation", "path": "logical-forms/validation-*"}, {"split": "test", "path": "logical-forms/test-*"}]}]}
2024-01-11T07:39:12+00:00
[]
[ "en" ]
3475bc217e5241f9a5c833b2f8ae9b74a2d7e44d
# Dataset Card for BrWaC

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [BrWaC homepage](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC)
- **Repository:** [BrWaC repository](https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC)
- **Paper:** [The brWaC Corpus: A New Open Resource for Brazilian Portuguese](https://www.aclweb.org/anthology/L18-1686/)
- **Point of Contact:** [Jorge A. Wagner Filho](mailto:jawfilho@inf.ufrgs.br)

### Dataset Summary

The BrWaC (Brazilian Portuguese Web as Corpus) is a large corpus constructed following the Wacky framework, which was made public for research purposes. The current corpus version, released in January 2017, is composed of 3.53 million documents, 2.68 billion tokens and 5.79 million types. Please note that this resource is available solely for academic research purposes, and you agreed not to use it for any commercial applications. Manually download at https://www.inf.ufrgs.br/pln/wiki/index.php?title=BrWaC

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Portuguese

## Dataset Structure

### Data Instances

An example from the BrWaC dataset looks as follows:

```
{
  "doc_id": "netg-1afc73",
  "text": {
    "paragraphs": [
      ["Conteúdo recente"],
      ["ESPUMA MARROM CHAMADA \"NINGUÉM MERECE\""],
      ["31 de Agosto de 2015, 7:07 , por paulo soavinski - | No one following this article yet."],
      ["Visualizado 202 vezes"],
      ["JORNAL ELETRÔNICO DA ILHA DO MEL"],
      [
        "Uma espuma marrom escuro tem aparecido com frequência na Praia de Fora.",
        "Na faixa de areia ela aparece disseminada e não chama muito a atenção.",
        "No Buraco do Aipo, com muitas pedras, ela aparece concentrada.",
        "É fácil saber que esta espuma estranha está lá, quando venta.",
        "Pequenos algodões de espuma começam a flutuar no espaço, pertinho da Praia do Saquinho.",
        "Quem pode ajudar na coleta deste material, envio a laboratório renomado e pagamento de análises, favor entrar em contato com o site."
      ]
    ]
  },
  "title": "ESPUMA MARROM CHAMADA ‟NINGUÉM MERECE‟ - paulo soavinski",
  "uri": "http://blogoosfero.cc/ilhadomel/pousadasilhadomel.com.br/espuma-marrom-chamada-ninguem-merece"
}
```

### Data Fields

- `doc_id`: The document ID
- `title`: The document title
- `uri`: URI where the document was extracted from
- `text`: A list of document paragraphs (with a list of sentences in it as a list of strings)

### Data Splits

The data is only split into a train set with a size of 3530796 samples.
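The nested `text.paragraphs` structure shown above (a list of paragraphs, each a list of sentence strings) flattens naturally back into running text. Below is a minimal sketch in plain Python; the only assumption beyond the field layout shown in the instance above is the choice of joining sentences with spaces and paragraphs with blank lines.

```
def document_text(record):
    """Rebuild running text from a BrWaC record's nested paragraph structure."""
    paragraphs = record["text"]["paragraphs"]
    # Each paragraph is a list of sentences; join sentences with spaces
    # and separate paragraphs with blank lines.
    return "\n\n".join(" ".join(sentences) for sentences in paragraphs)

# Toy record mirroring the layout of the instance shown above.
record = {
    "doc_id": "netg-1afc73",
    "text": {"paragraphs": [["Conteúdo recente"], ["Visualizado 202 vezes"]]},
}
print(document_text(record))
```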
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{wagner2018brwac,
  title={The brwac corpus: A new open resource for brazilian portuguese},
  author={Wagner Filho, Jorge A and Wilkens, Rodrigo and Idiart, Marco and Villavicencio, Aline},
  booktitle={Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
  year={2018}
}
```

### Contributions

Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
brwac
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:pt", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["pt"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "brwac", "pretty_name": "BrWaC", "dataset_info": {"features": [{"name": "doc_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "uri", "dtype": "string"}, {"name": "text", "sequence": [{"name": "paragraphs", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 18828421452, "num_examples": 3530796}], "download_size": 0, "dataset_size": 18828421452}}
2024-01-18T11:02:06+00:00
[]
[ "pt" ]
ed6539dc16c18c481ff3574376b79d7a83a57fb2
# Dataset Card for Business Scene Dialogue

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://raw.githubusercontent.com/tsuruoka-lab/BSD/)
- **Repository:** [Github](https://raw.githubusercontent.com/tsuruoka-lab/BSD/)
- **Paper:** [Rikters et al., 2019](https://www.aclweb.org/anthology/D19-5204)
- **Leaderboard:**
- **Point of Contact:** Matīss Rikters

### Dataset Summary

This is the Business Scene Dialogue (BSD) dataset, a Japanese-English parallel corpus containing written conversations in various business scenarios.

The dataset was constructed in 3 steps:
 1) selecting business scenes,
 2) writing monolingual conversation scenarios according to the selected scenes, and
 3) translating the scenarios into the other language.

Half of the monolingual scenarios were written in Japanese and the other half were written in English.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

English, Japanese.

## Dataset Structure

### Data Instances

Each instance contains a conversation identifier, a sentence number that indicates its position within the conversation, speaker names in English and Japanese, text in English and Japanese, the original language, the scene of the scenario (`tag`), and the title of the scenario (`title`).

```python
{
  "id": "190315_E004_13",
  "no": 14,
  "en_speaker": "Mr. Sam Lee",
  "ja_speaker": "サム リーさん",
  "en_sentence": "Would you guys consider a different scheme?",
  "ja_sentence": "別の事業案も考慮されますか?",
  "original_language": "en",
  "tag": "phone call",
  "title": "Phone: Review spec and scheme"
}
```

### Data Fields

- `id`: dialogue identifier
- `no`: sentence pair number within a dialogue
- `en_speaker`: speaker name in English
- `ja_speaker`: speaker name in Japanese
- `en_sentence`: sentence in English
- `ja_sentence`: sentence in Japanese
- `original_language`: language in which the monolingual scenario was written
- `tag`: scenario
- `title`: scenario title

### Data Splits

- There are a total of 24171 sentences / 808 business scenarios.
- Train: 20000 sentences / 670 scenarios
- Dev: 2051 sentences / 69 scenarios
- Test: 2120 sentences / 69 scenarios

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Dataset provided for research purposes only. Please check the dataset license for additional information.

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

This dataset was released under the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.

### Citation Information

```
@inproceedings{rikters-etal-2019-designing,
    title = "Designing the Business Conversation Corpus",
    author = "Rikters, Mat{\=\i}ss and Ri, Ryokan and Li, Tong and Nakazawa, Toshiaki",
    booktitle = "Proceedings of the 6th Workshop on Asian Translation",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D19-5204",
    doi = "10.18653/v1/D19-5204",
    pages = "54--61"
}
```

### Contributions

Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset.
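A minimal usage sketch with the Hugging Face `datasets` library, assuming the hub id `bsd_ja_en` and the split and field names given in this card's metadata:

```python
# Load the Business Scene Dialogue corpus and print one aligned sentence pair.
# Hub id and field names are taken from the dataset card above; this is a
# sketch, not an official loading recipe.
from datasets import load_dataset

bsd = load_dataset("bsd_ja_en")  # splits: train / validation / test

pair = bsd["train"][0]
print(pair["tag"], "-", pair["title"])   # scenario tag and title
print("EN:", pair["en_sentence"])        # English side of the pair
print("JA:", pair["ja_sentence"])        # Japanese side of the pair
```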
bsd_ja_en
[ "task_categories:translation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "language:ja", "license:cc-by-nc-sa-4.0", "business-conversations-translation", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en", "ja"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["translation"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "paperswithcode_id": "business-scene-dialogue", "pretty_name": "Business Scene Dialogue", "tags": ["business-conversations-translation"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tag", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "original_language", "dtype": "string"}, {"name": "no", "dtype": "int32"}, {"name": "en_speaker", "dtype": "string"}, {"name": "ja_speaker", "dtype": "string"}, {"name": "en_sentence", "dtype": "string"}, {"name": "ja_sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4778291, "num_examples": 20000}, {"name": "test", "num_bytes": 492986, "num_examples": 2120}, {"name": "validation", "num_bytes": 477935, "num_examples": 2051}], "download_size": 1843443, "dataset_size": 5749212}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-11T07:36:44+00:00
[]
[ "en", "ja" ]
1dbdabb101d60471e705f84ae821cdb804399dd7
# Dataset Card for BsWac

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://nlp.ffzg.hr/resources/corpora/bswac/
- **Repository:** https://www.clarin.si/repository/xmlui/handle/11356/1062
- **Paper:** http://nlp.ffzg.hr/data/publications/nljubesi/ljubesic14-bs.pdf
- **Leaderboard:**
- **Point of Contact:** [Nikola Ljubešič](mailto:nikola.ljubesic@ffzg.hr)

### Dataset Summary

The Bosnian web corpus bsWaC was built by crawling the .ba top-level domain in 2014. The corpus was near-deduplicated at the paragraph level, normalised via diacritic restoration, morphosyntactically annotated and lemmatised. The corpus is shuffled by paragraphs. Each paragraph contains metadata on the URL, domain and language identification (Bosnian vs. Croatian vs. Serbian).

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset is monolingual, in the Bosnian language.

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is released under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license.

### Citation Information

```
@misc{11356/1062,
 title = {Bosnian web corpus {bsWaC} 1.1},
 author = {Ljube{\v s}i{\'c}, Nikola and Klubi{\v c}ka, Filip},
 url = {http://hdl.handle.net/11356/1062},
 note = {Slovenian language resource repository {CLARIN}.{SI}},
 copyright = {Creative Commons - Attribution-{ShareAlike} 4.0 International ({CC} {BY}-{SA} 4.0)},
 year = {2016}
}
```

### Contributions

Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset.
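With roughly 355 million sentences in a single train split, streaming is the practical way to inspect this corpus; a minimal sketch, assuming the hub id `bswac` and the `sentence` field from the metadata below, and that the loader supports streaming:

```python
# Stream a handful of bsWaC sentences without materialising the full corpus.
from itertools import islice
from datasets import load_dataset

bswac = load_dataset("bswac", split="train", streaming=True)
for row in islice(bswac, 3):
    print(row["sentence"])
```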
bswac
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100M<n<1B", "source_datasets:original", "language:bs", "license:cc-by-sa-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["bs"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "BsWac", "dataset_info": {"config_name": "bswac", "features": [{"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8801535375, "num_examples": 354581267}], "download_size": 1988514951, "dataset_size": 8801535375}}
2024-01-11T12:54:46+00:00
[]
[ "bs" ]
28e91a21a22b95987a90a46cb6d7741c7aad8158
# Dataset Card for C3

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

Machine reading comprehension tasks require a machine reader to answer questions relevant to the given document. In this paper, we present the first free-form multiple-Choice Chinese machine reading Comprehension dataset (C^3), containing 13,369 documents (dialogues or more formally written mixed-genre texts) and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second-language examinations.

We present a comprehensive analysis of the prior knowledge (i.e., linguistic, domain-specific, and general world knowledge) needed for these real-world problems. We implement rule-based and popular neural methods and find that there is still a significant performance gap between the best performing model (68.5%) and human readers (96.0%), especially on problems that require prior knowledge. We further study the effects of distractor plausibility and data augmentation based on translated relevant datasets for English on model performance. We expect C^3 to present great challenges to existing systems as answering 86.8% of questions requires both knowledge within and beyond the accompanying document, and we hope that C^3 can serve as a platform to study how to leverage various kinds of prior knowledge to better understand a given written or orally oriented text.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

[More Information Needed]

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

[More Information Needed]

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Dataset provided for research purposes only. Please check the dataset license for additional information.

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@article{sun2019investigating,
  title={Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension},
  author={Sun, Kai and Yu, Dian and Yu, Dong and Cardie, Claire},
  journal={Transactions of the Association for Computational Linguistics},
  year={2020},
  url={https://arxiv.org/abs/1904.09679v3}
}
```

### Contributions

Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
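A minimal sketch of iterating over C3 questions with the Hugging Face `datasets` library, assuming the hub id `c3` and the config names (`mixed`, `dialog`) and nested feature layout given in the metadata below:

```python
# Walk the questions attached to one C3 document. `datasets` exposes the
# "questions" sequence of structs as a dict of aligned lists.
from datasets import load_dataset

c3 = load_dataset("c3", "mixed", split="train")

ex = c3[0]
document = " ".join(ex["documents"])  # mixed-genre text or dialogue turns
qs = ex["questions"]
for q, a, choices in zip(qs["question"], qs["answer"], qs["choice"]):
    print(q, "->", a, "| options:", choices)
```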
c3
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:zh", "license:other", "arxiv:1904.09679", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["zh"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "paperswithcode_id": "c3", "pretty_name": "C3", "dataset_info": [{"config_name": "dialog", "features": [{"name": "documents", "sequence": "string"}, {"name": "document_id", "dtype": "string"}, {"name": "questions", "sequence": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "choice", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 2039779, "num_examples": 4885}, {"name": "test", "num_bytes": 646955, "num_examples": 1627}, {"name": "validation", "num_bytes": 611106, "num_examples": 1628}], "download_size": 2073256, "dataset_size": 3297840}, {"config_name": "mixed", "features": [{"name": "documents", "sequence": "string"}, {"name": "document_id", "dtype": "string"}, {"name": "questions", "sequence": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "choice", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 2710473, "num_examples": 3138}, {"name": "test", "num_bytes": 891579, "num_examples": 1045}, {"name": "validation", "num_bytes": 910759, "num_examples": 1046}], "download_size": 3183780, "dataset_size": 4512811}], "configs": [{"config_name": "dialog", "data_files": [{"split": "train", "path": "dialog/train-*"}, {"split": "test", "path": "dialog/test-*"}, {"split": "validation", "path": "dialog/validation-*"}]}, {"config_name": "mixed", "data_files": [{"split": "train", "path": "mixed/train-*"}, {"split": "test", "path": "mixed/test-*"}, {"split": "validation", "path": "mixed/validation-*"}]}]}
2024-01-11T08:12:46+00:00
[ "1904.09679" ]
[ "zh" ]
a1a1ed1cb21664e5050c01cf19fa4f7c525bf2f3
# Dataset Card for C4

## Table of Contents

- [Dataset Card for C4](#dataset-card-for-c4)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://huggingface.co/datasets/allenai/c4
- **Paper:** https://arxiv.org/abs/1910.10683

### Dataset Summary

A colossal, cleaned version of Common Crawl's web crawl corpus. Based on the Common Crawl dataset: "https://commoncrawl.org".

This is the version prepared by AllenAI, hosted at this address: https://huggingface.co/datasets/allenai/c4

It comes in four variants:

- `en`: 305GB in JSON format
- `en.noblocklist`: 380GB in JSON format
- `en.noclean`: 2.3TB in JSON format
- `realnewslike`: 15GB in JSON format

The `en.noblocklist` variant is exactly the same as the `en` variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words.

### Supported Tasks and Leaderboards

C4 is mainly intended to pretrain language models and word representations.

### Languages

The dataset is in English.

## Dataset Structure

### Data Instances

An example from the `en` config is:

```
{
  'url': 'https://klyq.com/beginners-bbq-class-taking-place-in-missoula/',
  'text': 'Beginners BBQ Class Taking Place in Missoula!\nDo you want to get better at making delicious BBQ? You will have the opportunity, put this on your calendar now. Thursday, September 22nd join World Class BBQ Champion, Tony Balay from Lonestar Smoke Rangers. He will be teaching a beginner level class for everyone who wants to get better with their culinary skills.\nHe will teach you everything you need to know to compete in a KCBS BBQ competition, including techniques, recipes, timelines, meat selection and trimming, plus smoker and fire information.\nThe cost to be in the class is $35 per person, and for spectators it is free. Included in the cost will be either a t-shirt or apron and you will be tasting samples of each meat that is prepared.',
  'timestamp': '2019-04-25T12:57:54Z'
}
```

### Data Fields

The data have several fields:

- `url`: url of the source as a string
- `text`: text content as a string
- `timestamp`: timestamp as a string

### Data Splits

| name | train |validation|
|----------------|--------:|---------:|
| en |364868892| 364608|
| en.noblocklist |393391519| 393226|
| en.noclean | ?| ?|
| realnewslike | 13799838| 13863|

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

The C4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. You can find the code that has been used to build this dataset in [c4.py](https://github.com/tensorflow/datasets/blob/5952d3d60d60e1727786fa7a9a23d24bb463d4d6/tensorflow_datasets/text/c4.py) by TensorFlow Datasets.

The dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by [langdetect](https://github.com/Mimino666/langdetect) was discarded.

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.

### Citation Information

```
@article{2019t5,
  author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu},
  title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer},
  journal = {arXiv e-prints},
  year = {2019},
  archivePrefix = {arXiv},
  eprint = {1910.10683},
}
```

### Contributions

Thanks to [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for adding this dataset.
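### Usage Example

A minimal loading sketch, assuming a recent version of the Hugging Face `datasets` library; the configuration names follow the variant list above, and streaming avoids pulling the full ~305GB download up front:

```python
from datasets import load_dataset

# Stream the `en` variant so nothing is downloaded until examples are read.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

example = next(iter(c4))
print(example["url"])         # source URL (string)
print(example["timestamp"])   # crawl timestamp (string)
print(example["text"][:200])  # first 200 characters of the document text
```

Swapping `"en"` for `"en.noblocklist"`, `"en.noclean"`, or `"realnewslike"` selects the other variants described above.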
c4
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:100M<n<1B", "source_datasets:original", "language:en", "license:odc-by", "arxiv:1910.10683", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["odc-by"], "multilinguality": ["multilingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "c4", "pretty_name": "C4", "dataset_info": [{"config_name": "en", "features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 828589180707, "num_examples": 364868892}, {"name": "validation", "num_bytes": 825767266, "num_examples": 364608}], "download_size": 326778635540, "dataset_size": 1657178361414}, {"config_name": "en.noblocklist", "features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1029628201361, "num_examples": 393391519}, {"name": "validation", "num_bytes": 1025606012, "num_examples": 393226}], "download_size": 406611392434, "dataset_size": 2059256402722}, {"config_name": "realnewslike", "features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 38165657946, "num_examples": 13799838}, {"name": "validation", "num_bytes": 37875873, "num_examples": 13863}], "download_size": 15419740744, "dataset_size": 76331315892}, {"config_name": "en.noclean", "features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6715509699938, "num_examples": 1063805381}, {"name": "validation", "num_bytes": 6706356913, "num_examples": 1065029}], "download_size": 2430376268625, "dataset_size": 6722216056851}]}
2024-01-18T11:02:07+00:00
[ "1910.10683" ]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-odc-by #arxiv-1910.10683 #region-us
Dataset Card for C4 =================== Table of Contents ----------------- * Dataset Card for C4 + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages + Dataset Structure - Data Instances - Data Fields - Data Splits + Dataset Creation - Curation Rationale - Source Data * Initial Data Collection and Normalization * Who are the source language producers? - Annotations * Annotation process * Who are the annotators? - Personal and Sensitive Information + Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations + Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions Dataset Description ------------------- * Homepage: URL * Paper: URL ### Dataset Summary A colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: "URL". This is the version prepared by AllenAI, hosted at this address: URL It comes in four variants: * 'en': 305GB in JSON format * 'en.noblocklist': 380GB in JSON format * 'en.noclean': 2.3TB in JSON format * 'realnewslike': 15GB in JSON format The 'en.noblocklist' variant is exactly the same as the 'en' variant, except we turned off the so-called "badwords filter", which removes all documents that contain words from the lists at URL ### Supported Tasks and Leaderboards C4 is mainly intended to pretrain language models and word representations. ### Languages The dataset is in English. Dataset Structure ----------------- ### Data Instances An example form the 'en' config is: ### Data Fields The data have several fields: * 'url': url of the source as a string * 'text': text content as a string * 'timestamp': timestamp as a string ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization C4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. You can find the code that has been used to build this dataset in URL by Tensorflow Datasets. The dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by langdetect was discarded. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset. ### Contributions Thanks to @dirkgr and @lhoestq for adding this dataset.
[ "### Dataset Summary\n\n\nA colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: \"URL\".\n\n\nThis is the version prepared by AllenAI, hosted at this address: URL\n\n\nIt comes in four variants:\n\n\n* 'en': 305GB in JSON format\n* 'en.noblocklist': 380GB in JSON format\n* 'en.noclean': 2.3TB in JSON format\n* 'realnewslike': 15GB in JSON format\n\n\nThe 'en.noblocklist' variant is exactly the same as the 'en' variant, except we turned off the so-called \"badwords filter\", which removes all documents that contain words from the lists at URL", "### Supported Tasks and Leaderboards\n\n\nC4 is mainly intended to pretrain language models and word representations.", "### Languages\n\n\nThe dataset is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example form the 'en' config is:", "### Data Fields\n\n\nThe data have several fields:\n\n\n* 'url': url of the source as a string\n* 'text': text content as a string\n* 'timestamp': timestamp as a string", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nC4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. You can find the code that has been used to build this dataset in URL by Tensorflow Datasets.\n\n\nThe dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by langdetect was discarded.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nAllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.", "### Contributions\n\n\nThanks to @dirkgr and @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-odc-by #arxiv-1910.10683 #region-us \n", "### Dataset Summary\n\n\nA colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: \"URL\".\n\n\nThis is the version prepared by AllenAI, hosted at this address: URL\n\n\nIt comes in four variants:\n\n\n* 'en': 305GB in JSON format\n* 'en.noblocklist': 380GB in JSON format\n* 'en.noclean': 2.3TB in JSON format\n* 'realnewslike': 15GB in JSON format\n\n\nThe 'en.noblocklist' variant is exactly the same as the 'en' variant, except we turned off the so-called \"badwords filter\", which removes all documents that contain words from the lists at URL", "### Supported Tasks and Leaderboards\n\n\nC4 is mainly intended to pretrain language models and word representations.", "### Languages\n\n\nThe dataset is in English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example form the 'en' config is:", "### Data Fields\n\n\nThe data have several fields:\n\n\n* 'url': url of the source as a string\n* 'text': text content as a string\n* 'timestamp': timestamp as a string", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nC4 dataset is a collection of about 750GB of English-language text sourced from the public Common Crawl web scrape. It includes heuristics to extract only natural language (as opposed to boilerplate and other gibberish) in addition to extensive deduplication. You can find the code that has been used to build this dataset in URL by Tensorflow Datasets.\n\n\nThe dataset was explicitly designed to be English only: any page that was not given a probability of at least 99% of being English by langdetect was discarded.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nAllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.", "### Contributions\n\n\nThanks to @dirkgr and @lhoestq for adding this dataset." ]
[ 121, 161, 26, 18, 17, 48, 11, 7, 4, 133, 10, 5, 5, 9, 18, 7, 8, 14, 6, 53, 22 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-odc-by #arxiv-1910.10683 #region-us \n### Dataset Summary\n\n\nA colossal, cleaned version of Common Crawl's web crawl corpus. Based on Common Crawl dataset: \"URL\".\n\n\nThis is the version prepared by AllenAI, hosted at this address: URL\n\n\nIt comes in four variants:\n\n\n* 'en': 305GB in JSON format\n* 'en.noblocklist': 380GB in JSON format\n* 'en.noclean': 2.3TB in JSON format\n* 'realnewslike': 15GB in JSON format\n\n\nThe 'en.noblocklist' variant is exactly the same as the 'en' variant, except we turned off the so-called \"badwords filter\", which removes all documents that contain words from the lists at URL### Supported Tasks and Leaderboards\n\n\nC4 is mainly intended to pretrain language models and word representations.### Languages\n\n\nThe dataset is in English.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example form the 'en' config is:### Data Fields\n\n\nThe data have several fields:\n\n\n* 'url': url of the source as a string\n* 'text': text content as a string\n* 'timestamp': timestamp as a string### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data" ]
775098da3ba75f033781f8061900b62503e9bea0
# Dataset Card for CAIL 2018

## Table of Contents

- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/thunlp/CAIL/blob/master/README_en.md)
- **Repository:** [Github](https://github.com/thunlp/CAIL)
- **Paper:** [Arxiv](https://arxiv.org/abs/1807.02478)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

[More Information Needed]

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset.
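### Usage Example

The schema sections above are unfilled; the dataset metadata recorded later in this entry lists fields such as `fact`, `accusation`, and `relevant_articles`, along with contest-style split names. A hedged loading sketch based on that metadata — treat the split and field names as assumptions until verified against the source repository:

```python
from datasets import load_dataset

# Split name taken from the dataset metadata (e.g. `exercise_contest_train`).
cail = load_dataset("cail2018", split="exercise_contest_train")

case = cail[0]
print(case["fact"][:100])         # free-text case description (Chinese)
print(case["accusation"])         # list of charge names
print(case["relevant_articles"])  # list of statute article numbers
```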
cail2018
[ "task_categories:other", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:zh", "license:unknown", "judgement-prediction", "arxiv:1807.02478", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["zh"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "paperswithcode_id": "chinese-ai-and-law-cail-2018", "pretty_name": "CAIL 2018", "tags": ["judgement-prediction"], "dataset_info": {"features": [{"name": "fact", "dtype": "string"}, {"name": "relevant_articles", "sequence": "int32"}, {"name": "accusation", "sequence": "string"}, {"name": "punish_of_money", "dtype": "float32"}, {"name": "criminals", "sequence": "string"}, {"name": "death_penalty", "dtype": "bool"}, {"name": "imprisonment", "dtype": "float32"}, {"name": "life_imprisonment", "dtype": "bool"}], "splits": [{"name": "exercise_contest_train", "num_bytes": 220112348, "num_examples": 154592}, {"name": "exercise_contest_valid", "num_bytes": 21702109, "num_examples": 17131}, {"name": "exercise_contest_test", "num_bytes": 41057538, "num_examples": 32508}, {"name": "first_stage_train", "num_bytes": 1779653382, "num_examples": 1710856}, {"name": "first_stage_test", "num_bytes": 244334666, "num_examples": 217016}, {"name": "final_test", "num_bytes": 44194611, "num_examples": 35922}], "download_size": 1167828091, "dataset_size": 2351054654}, "configs": [{"config_name": "default", "data_files": [{"split": "exercise_contest_train", "path": "data/exercise_contest_train-*"}, {"split": "exercise_contest_valid", "path": "data/exercise_contest_valid-*"}, {"split": "exercise_contest_test", "path": "data/exercise_contest_test-*"}, {"split": "first_stage_train", "path": "data/first_stage_train-*"}, {"split": "first_stage_test", "path": "data/first_stage_test-*"}, {"split": "final_test", "path": "data/final_test-*"}]}]}
2024-01-16T15:08:12+00:00
[ "1807.02478" ]
[ "zh" ]
TAGS #task_categories-other #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Chinese #license-unknown #judgement-prediction #arxiv-1807.02478 #region-us
--- # Dataset Card for CAIL 2018 ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Github - Repository: Github - Paper: Arxiv - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @JetRunner for adding this dataset.
[ "# Dataset Card for CAIL 2018", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Arxiv\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @JetRunner for adding this dataset." ]
[ "TAGS\n#task_categories-other #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Chinese #license-unknown #judgement-prediction #arxiv-1807.02478 #region-us \n", "# Dataset Card for CAIL 2018", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Arxiv\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @JetRunner for adding this dataset." ]
[ 88, 8, 120, 33, 6, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 18 ]
[ "passage: TAGS\n#task_categories-other #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Chinese #license-unknown #judgement-prediction #arxiv-1807.02478 #region-us \n# Dataset Card for CAIL 2018## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: Arxiv\n- Leaderboard:\n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @JetRunner for adding this dataset." ]
4749e1d6950c2377b62a2e424147e68406cca9dd
# Dataset Card for CANER

## Table of Contents

- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** [Classical-Arabic-Named-Entity-Recognition-Corpus](https://github.com/RamziSalah)
- **Paper:** [Researchgate](https://www.researchgate.net/publication/330075080_BUILDING_THE_CLASSICAL_ARABIC_NAMED_ENTITY_RECOGNITION_CORPUS_CANERCORPUS)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The Classical Arabic Named Entity Recognition corpus is a new corpus of tagged data that can be useful for handling issues in the recognition of Arabic named entities.

### Supported Tasks and Leaderboards

- Named Entity Recognition

### Languages

Classical Arabic

## Dataset Structure

### Data Instances

An example from the dataset:

```
{'ner_tag': 1, 'token': 'الجامع'}
```

Where 1 stands for "Book".

### Data Fields

- `id`: id of the sample
- `token`: the tokens of the example text
- `ner_tag`: the NER tags of each token

The NER tags correspond to this list:

```
"Allah", "Book", "Clan", "Crime", "Date", "Day", "Hell", "Loc", "Meas", "Mon", "Month", "NatOb", "Number", "O", "Org", "Para", "Pers", "Prophet", "Rlig", "Sect", "Time"
```

### Data Splits

Training split only

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

Ramzi Salah and Lailatul Qadri Zakaria

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

[More Information Needed]

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@article{article,
  author = {Salah, Ramzi and Zakaria, Lailatul},
  year = {2018},
  month = {12},
  pages = {},
  title = {BUILDING THE CLASSICAL ARABIC NAMED ENTITY RECOGNITION CORPUS (CANERCORPUS)},
  volume = {96},
  journal = {Journal of Theoretical and Applied Information Technology}
}
```

### Contributions

Thanks to [@KMFODA](https://github.com/KMFODA) for adding this dataset.
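### Usage Example

A short sketch, assuming the Hugging Face `datasets` library and the Hub id `caner` recorded below. Since `ner_tag` is stored as a class index, `ClassLabel.int2str` recovers the tag names listed above:

```python
from datasets import load_dataset

caner = load_dataset("caner", split="train")  # only a train split is provided

# Map the integer tag back to its name (e.g. 1 -> "Book").
ner_labels = caner.features["ner_tag"]
example = caner[0]
print(example["token"], "->", ner_labels.int2str(example["ner_tag"]))
```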
caner
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ar", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ar"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "CANER", "dataset_info": {"features": [{"name": "token", "dtype": "string"}, {"name": "ner_tag", "dtype": {"class_label": {"names": {"0": "Allah", "1": "Book", "2": "Clan", "3": "Crime", "4": "Date", "5": "Day", "6": "Hell", "7": "Loc", "8": "Meas", "9": "Mon", "10": "Month", "11": "NatOb", "12": "Number", "13": "O", "14": "Org", "15": "Para", "16": "Pers", "17": "Prophet", "18": "Rlig", "19": "Sect", "20": "Time"}}}}], "splits": [{"name": "train", "num_bytes": 5095617, "num_examples": 258240}], "download_size": 1459014, "dataset_size": 5095617}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-16T13:38:20+00:00
[]
[ "ar" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Arabic #license-unknown #region-us
# Dataset Card for CANER ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: Classical-Arabic-Named-Entity-Recognition-Corpus - Paper: Researchgate - Leaderboard: - Point of Contact: ### Dataset Summary The Classical Arabic Named Entity Recognition corpus is a new corpus of tagged data that can be useful for handling the issues in recognition of Arabic named entities. ### Supported Tasks and Leaderboards - Named Entity Recognition ### Languages Classical Arabic ## Dataset Structure ### Data Instances An example from the dataset: Where 1 stands for "Book" ### Data Fields - 'id': id of the sample - 'token': the tokens of the example text - 'ner_tag': the NER tags of each token The NER tags correspond to this list: ### Data Splits Training splits only ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? Ramzi Salah and Lailatul Qadri Zakaria ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information @article{article, author = {Salah, Ramzi and Zakaria, Lailatul}, year = {2018}, month = {12}, pages = {}, title = {BUILDING THE CLASSICAL ARABIC NAMED ENTITY RECOGNITION CORPUS (CANERCORPUS)}, volume = {96}, journal = {Journal of Theoretical and Applied Information Technology} } ### Contributions Thanks to @KMFODA for adding this dataset.
[ "# Dataset Card for CANER", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: \n- Repository: Classical-Arabic-Named-Entity-Recognition-Corpus\n- Paper: Researchgate\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe Classical Arabic Named Entity Recognition corpus is a new corpus of tagged data that can be useful for handling the issues in recognition of Arabic named entities.", "### Supported Tasks and Leaderboards\n\n- Named Entity Recognition", "### Languages\n\nClassical Arabic", "## Dataset Structure", "### Data Instances\n\nAn example from the dataset:\n\nWhere 1 stands for \"Book\"", "### Data Fields\n\n- 'id': id of the sample\n - 'token': the tokens of the example text\n - 'ner_tag': the NER tags of each token\n\nThe NER tags correspond to this list:", "### Data Splits\n\nTraining splits only", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nRamzi Salah and Lailatul Qadri Zakaria", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\n@article{article,\nauthor = {Salah, Ramzi and Zakaria, Lailatul},\nyear = {2018},\nmonth = {12},\npages = {},\ntitle = {BUILDING THE CLASSICAL ARABIC NAMED ENTITY RECOGNITION CORPUS (CANERCORPUS)},\nvolume = {96},\njournal = {Journal of Theoretical and Applied Information Technology}\n}", "### Contributions\n\nThanks to @KMFODA for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Arabic #license-unknown #region-us \n", "# Dataset Card for CANER", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: \n- Repository: Classical-Arabic-Named-Entity-Recognition-Corpus\n- Paper: Researchgate\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe Classical Arabic Named Entity Recognition corpus is a new corpus of tagged data that can be useful for handling the issues in recognition of Arabic named entities.", "### Supported Tasks and Leaderboards\n\n- Named Entity Recognition", "### Languages\n\nClassical Arabic", "## Dataset Structure", "### Data Instances\n\nAn example from the dataset:\n\nWhere 1 stands for \"Book\"", "### Data Fields\n\n- 'id': id of the sample\n - 'token': the tokens of the example text\n - 'ner_tag': the NER tags of each token\n\nThe NER tags correspond to this list:", "### Data Splits\n\nTraining splits only", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nRamzi Salah and Lailatul Qadri Zakaria", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\n@article{article,\nauthor = {Salah, Ramzi and Zakaria, Lailatul},\nyear = {2018},\nmonth = {12},\npages = {},\ntitle = {BUILDING THE CLASSICAL ARABIC NAMED ENTITY RECOGNITION CORPUS (CANERCORPUS)},\nvolume = {96},\njournal = {Journal of Theoretical and Applied Information Technology}\n}", "### Contributions\n\nThanks to @KMFODA for adding this dataset." ]
[ 97, 7, 120, 45, 42, 18, 7, 6, 21, 51, 9, 5, 7, 4, 10, 10, 5, 5, 19, 8, 8, 7, 8, 7, 5, 6, 98, 17 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Arabic #license-unknown #region-us \n# Dataset Card for CANER## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: \n- Repository: Classical-Arabic-Named-Entity-Recognition-Corpus\n- Paper: Researchgate\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nThe Classical Arabic Named Entity Recognition corpus is a new corpus of tagged data that can be useful for handling the issues in recognition of Arabic named entities.### Supported Tasks and Leaderboards\n\n- Named Entity Recognition### Languages\n\nClassical Arabic## Dataset Structure### Data Instances\n\nAn example from the dataset:\n\nWhere 1 stands for \"Book\"### Data Fields\n\n- 'id': id of the sample\n - 'token': the tokens of the example text\n - 'ner_tag': the NER tags of each token\n\nThe NER tags correspond to this list:### Data Splits\n\nTraining splits only## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?\n\nRamzi Salah and Lailatul Qadri Zakaria### Personal and Sensitive Information## Considerations for Using the Data" ]
42c1ec984cc5461418a24fec2cd9ab8c8d4aa99c
# Dataset Card for CAPES

## Table of Contents

- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES](https://sites.google.com/view/felipe-soares/datasets)
- **Repository:**
- **Paper:** [A Parallel Corpus of Theses and Dissertations Abstracts](https://arxiv.org/abs/1905.01715)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

A parallel corpus of theses and dissertation abstracts in English and Portuguese was collected from the CAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil. The corpus is sentence aligned for all language pairs. Approximately 240,000 documents were collected and aligned using the Hunalign algorithm.

### Supported Tasks and Leaderboards

The underlying task is machine translation.

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{soares2018parallel,
  title={A Parallel Corpus of Theses and Dissertations Abstracts},
  author={Soares, Felipe and Yamashita, Gabrielli Harumi and Anzanello, Michel Jose},
  booktitle={International Conference on Computational Processing of the Portuguese Language},
  pages={345--352},
  year={2018},
  organization={Springer}
}
```

### Contributions

Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
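### Usage Example

A minimal sketch, assuming the Hugging Face `datasets` library and the Hub id `capes` recorded below; per the dataset metadata in this entry, the default `en-pt` configuration exposes a single `train` split whose `translation` feature holds aligned sentence pairs:

```python
from datasets import load_dataset

capes = load_dataset("capes", split="train")  # default config is `en-pt`

pair = capes[0]["translation"]
print(pair["en"])  # English sentence
print(pair["pt"])  # Portuguese sentence
```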
capes
[ "task_categories:translation", "annotations_creators:found", "language_creators:found", "multilinguality:multilingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "language:pt", "license:unknown", "dissertation-abstracts-translation", "theses-translation", "arxiv:1905.01715", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en", "pt"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "paperswithcode_id": "capes", "pretty_name": "CAPES", "tags": ["dissertation-abstracts-translation", "theses-translation"], "dataset_info": {"config_name": "en-pt", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en", "pt"]}}}], "splits": [{"name": "train", "num_bytes": 472483436, "num_examples": 1157610}], "download_size": 285468020, "dataset_size": 472483436}, "configs": [{"config_name": "en-pt", "data_files": [{"split": "train", "path": "en-pt/train-*"}], "default": true}]}
2024-01-16T10:30:24+00:00
[ "1905.01715" ]
[ "en", "pt" ]
TAGS #task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-English #language-Portuguese #license-unknown #dissertation-abstracts-translation #theses-translation #arxiv-1905.01715 #region-us
# Dataset Card for CAPES ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES - Repository: - Paper: A Parallel Corpus of Theses and Dissertations Abstracts - Leaderboard: - Point of Contact: ### Dataset Summary A parallel corpus of theses and dissertations abstracts in English and Portuguese were collected from the CAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil. The corpus is sentence aligned for all language pairs. Approximately 240,000 documents were collected and aligned using the Hunalign algorithm. ### Supported Tasks and Leaderboards The underlying task is machine translation. ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @patil-suraj for adding this dataset.
[ "# Dataset Card for CAPES", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES\n- Repository:\n- Paper: A Parallel Corpus of Theses and Dissertations Abstracts\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nA parallel corpus of theses and dissertations abstracts in English and Portuguese were collected from the\nCAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil.\nThe corpus is sentence aligned for all language pairs. Approximately 240,000 documents were\ncollected and aligned using the Hunalign algorithm.", "### Supported Tasks and Leaderboards\n\nThe underlying task is machine translation.", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ "TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-English #language-Portuguese #license-unknown #dissertation-abstracts-translation #theses-translation #arxiv-1905.01715 #region-us \n", "# Dataset Card for CAPES", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES\n- Repository:\n- Paper: A Parallel Corpus of Theses and Dissertations Abstracts\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nA parallel corpus of theses and dissertations abstracts in English and Portuguese were collected from the\nCAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil.\nThe corpus is sentence aligned for all language pairs. Approximately 240,000 documents were\ncollected and aligned using the Hunalign algorithm.", "### Supported Tasks and Leaderboards\n\nThe underlying task is machine translation.", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @patil-suraj for adding this dataset." ]
[ 105, 7, 120, 55, 83, 19, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 19 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-1M<n<10M #source_datasets-original #language-English #language-Portuguese #license-unknown #dissertation-abstracts-translation #theses-translation #arxiv-1905.01715 #region-us \n# Dataset Card for CAPES## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES\n- Repository:\n- Paper: A Parallel Corpus of Theses and Dissertations Abstracts\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nA parallel corpus of theses and dissertations abstracts in English and Portuguese were collected from the\nCAPES website (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) - Brazil.\nThe corpus is sentence aligned for all language pairs. Approximately 240,000 documents were\ncollected and aligned using the Hunalign algorithm.### Supported Tasks and Leaderboards\n\nThe underlying task is machine translation.### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases" ]
290898d2d08b6591db17005504e40ce00ac1028e
# Dataset Card for Casino ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [Github: Kushal Chawla CaSiNo](https://github.com/kushalchawla/CaSiNo) - **Paper:** [CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic Negotiation Systems](https://aclanthology.org/2021.naacl-main.254.pdf) - **Point of Contact:** [Kushal Chawla](kchawla@usc.edu) ### Dataset Summary We provide a novel dataset (referred to as CaSiNo) of 1030 negotiation dialogues. Two participants take the role of campsite neighbors and negotiate for Food, Water, and Firewood packages, based on their individual preferences and requirements. This design keeps the task tractable, while still facilitating linguistically rich and personal conversations. This helps to overcome the limitations of prior negotiation datasets such as Deal or No Deal and Craigslist Bargain. Each dialogue consists of rich meta-data including participant demographics, personality, and their subjective evaluation of the negotiation in terms of satisfaction and opponent likeness. ### Supported Tasks and Leaderboards Train end-to-end models for negotiation ### Languages English ## Dataset Structure ### Data Instances ``` { "chat_logs": [ { "text": "Hello! \ud83d\ude42 Let's work together on a deal for these packages, shall we? What are you most interested in?", "task_data": {}, "id": "mturk_agent_1" }, ... ], "participant_info": { "mturk_agent_1": { "value2issue": ... "value2reason": ... "outcomes": ... "demographics": ... "personality": ... }, "mturk_agent_2": ... }, "annotations": [ ["Hello! \ud83d\ude42 Let's work together on a deal for these packages, shall we? What are you most interested in?", "promote-coordination,elicit-pref"], ... ] } ``` ### Data Fields - `chat_logs`: The negotiation dialogue between two participants - `text`: The dialogue utterance - `task_data`: Meta-data associated with the utterance such as the deal submitted by a participant - `id`: The ID of the participant who typed this utterance - `participant_info`: Meta-data about the two participants in this conversation - `mturk_agent_1`: For the first participant (Note that 'first' is just for reference. There is no order between the participants and any participant can start the conversation) - `value2issue`: The priority order of this participant among Food, Water, Firewood - `value2reason`: The personal arguments given by the participants themselves, consistent with the above preference order. This preference order and these arguments were submitted before the negotiation began. 
- `outcomes`: The negotiation outcomes for this participant including objective and subjective assessment. - `demographics`: Demographic attributes of the participant in terms of age, gender, ethnicity, and education. - `personality`: Personality attributes for this participant, in terms of Big-5 and Social Value Orientation - `mturk_agent_2`: For the second participant; follows the same structure as above - `annotations`: Strategy annotations for each utterance in the dialogue, wherever available. The first element represents the utterance and the second represents a comma-separated list of all strategies present in that utterance. ### Data Splits No default data split has been provided. Hence, all 1030 data points are under the 'train' split. | | Train | | ----- | ----- | | total dialogues | 1030 | | annotated dialogues | 396 | ## Dataset Creation ### Curation Rationale The dataset was collected to address the limitations in prior negotiation datasets from the perspective of downstream applications in pedagogy and conversational AI. Please refer to the original paper published at NAACL 2021 for details about the rationale and data curation steps ([source paper](https://aclanthology.org/2021.naacl-main.254.pdf)). ### Source Data #### Initial Data Collection and Normalization The dialogues were crowdsourced on Amazon Mechanical Turk. The strategy annotations were performed by expert annotators (first three authors of the paper). Please refer to the original dataset paper published at NAACL 2021 for more details ([source paper](https://aclanthology.org/2021.naacl-main.254.pdf)). #### Who are the source language producers? The primary producers are Turkers on Amazon Mechanical Turk platform. Two turkers were randomly paired with each other to engage in a negotiation via a chat interface. Please refer to the original dataset paper published at NAACL 2021 for more details ([source paper](https://aclanthology.org/2021.naacl-main.254.pdf)). ### Annotations #### Annotation process From the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf) for this dataset: >Three expert annotators independently annotated 396 dialogues containing 4615 utterances. The annotation guidelines were iterated over a subset of 5 dialogues, while the reliability scores were computed on a different subset of 10 dialogues. We use the nominal form of Krippendorff’s alpha (Krippendorff, 2018) to measure the inter-annotator agreement. We provide the annotation statistics in Table 2. Although we release all the annotations, we skip Coordination and Empathy for our analysis in this work, due to higher subjectivity resulting in relatively lower reliability scores. #### Who are the annotators? Three expert annotators (first three authors of the paper). ### Personal and Sensitive Information All personally identifiable information about the participants such as MTurk Ids or HIT Ids was removed before releasing the data. ## Considerations for Using the Data ### Social Impact of Dataset Please refer to Section 8.2 in the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf). ### Discussion of Biases Please refer to Section 8.2 in the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf). ### Other Known Limitations Please refer to Section 7 in the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf). 
## Additional Information ### Dataset Curators Corresponding Author: Kushal Chawla (`kchawla@usc.edu`)\ Affiliation: University of Southern California\ Please refer to the [source paper](https://aclanthology.org/2021.naacl-main.254.pdf) for the complete author list. ### Licensing Information The project is licensed under CC-by-4.0 ### Citation Information ``` @inproceedings{chawla2021casino, title={CaSiNo: A Corpus of Campsite Negotiation Dialogues for Automatic Negotiation Systems}, author={Chawla, Kushal and Ramirez, Jaysa and Clever, Rene and Lucas, Gale and May, Jonathan and Gratch, Jonathan}, booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}, pages={3167--3185}, year={2021} } ``` ### Contributions Thanks to [Kushal Chawla](https://kushalchawla.github.io/) for adding this dataset.
casino
[ "task_categories:conversational", "task_categories:text-generation", "task_categories:fill-mask", "task_ids:dialogue-modeling", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["conversational", "text-generation", "fill-mask"], "task_ids": ["dialogue-modeling"], "paperswithcode_id": "casino", "pretty_name": "Campsite Negotiation Dialogues", "dataset_info": {"features": [{"name": "chat_logs", "list": [{"name": "text", "dtype": "string"}, {"name": "task_data", "struct": [{"name": "data", "dtype": "string"}, {"name": "issue2youget", "struct": [{"name": "Firewood", "dtype": "string"}, {"name": "Water", "dtype": "string"}, {"name": "Food", "dtype": "string"}]}, {"name": "issue2theyget", "struct": [{"name": "Firewood", "dtype": "string"}, {"name": "Water", "dtype": "string"}, {"name": "Food", "dtype": "string"}]}]}, {"name": "id", "dtype": "string"}]}, {"name": "participant_info", "struct": [{"name": "mturk_agent_1", "struct": [{"name": "value2issue", "struct": [{"name": "Low", "dtype": "string"}, {"name": "Medium", "dtype": "string"}, {"name": "High", "dtype": "string"}]}, {"name": "value2reason", "struct": [{"name": "Low", "dtype": "string"}, {"name": "Medium", "dtype": "string"}, {"name": "High", "dtype": "string"}]}, {"name": "outcomes", "struct": [{"name": "points_scored", "dtype": "int32"}, {"name": "satisfaction", "dtype": "string"}, {"name": "opponent_likeness", "dtype": "string"}]}, {"name": "demographics", "struct": [{"name": "age", "dtype": "int32"}, {"name": "gender", "dtype": "string"}, {"name": "ethnicity", "dtype": "string"}, {"name": "education", "dtype": "string"}]}, {"name": "personality", "struct": [{"name": "svo", "dtype": "string"}, {"name": "big-five", "struct": [{"name": "extraversion", "dtype": "float32"}, {"name": "agreeableness", "dtype": "float32"}, {"name": "conscientiousness", "dtype": "float32"}, {"name": "emotional-stability", "dtype": "float32"}, {"name": "openness-to-experiences", "dtype": "float32"}]}]}]}, {"name": "mturk_agent_2", "struct": [{"name": "value2issue", "struct": [{"name": "Low", "dtype": "string"}, {"name": "Medium", "dtype": "string"}, {"name": "High", "dtype": "string"}]}, {"name": "value2reason", "struct": [{"name": "Low", "dtype": "string"}, {"name": "Medium", "dtype": "string"}, {"name": "High", "dtype": "string"}]}, {"name": "outcomes", "struct": [{"name": "points_scored", "dtype": "int32"}, {"name": "satisfaction", "dtype": "string"}, {"name": "opponent_likeness", "dtype": "string"}]}, {"name": "demographics", "struct": [{"name": "age", "dtype": "int32"}, {"name": "gender", "dtype": "string"}, {"name": "ethnicity", "dtype": "string"}, {"name": "education", "dtype": "string"}]}, {"name": "personality", "struct": [{"name": "svo", "dtype": "string"}, {"name": "big-five", "struct": [{"name": "extraversion", "dtype": "float32"}, {"name": "agreeableness", "dtype": "float32"}, {"name": "conscientiousness", "dtype": "float32"}, {"name": "emotional-stability", "dtype": "float32"}, {"name": "openness-to-experiences", "dtype": "float32"}]}]}]}]}, {"name": "annotations", "list": {"list": "string"}}], "splits": [{"name": "train", "num_bytes": 3211407, "num_examples": 1030}], "download_size": 1247368, "dataset_size": 3211407}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-16T13:53:39+00:00
[]
[ "en" ]
cf24d44e517efa534f048e5fc5981f399ed25bee
# Dataset Card for Catalonia Independence Corpus ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/ixa-ehu/catalonia-independence-corpus - **Repository:** https://github.com/ixa-ehu/catalonia-independence-corpus - **Paper:** [Multilingual Stance Detection: The Catalonia Independence Corpus](https://www.aclweb.org/anthology/2020.lrec-1.171/) - **Leaderboard:** - **Point of Contact:** [Rodrigo Agerri](https://github.com/ragerri) (corpus creator) ### Dataset Summary This dataset contains two corpora, in Spanish and Catalan, that consist of annotated Twitter messages for automatic stance detection. The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia. Each corpus is annotated with three classes: AGAINST, FAVOR and NEUTRAL, which express the stance towards the target, the independence of Catalonia. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Spanish and Catalan ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
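Most card sections above are still unfilled, but the summary and the repository metadata are enough for a minimal usage sketch. The snippet below assumes the `datasets` library, the Hub identifier `catalonia_independence`, the two configurations `catalan` and `spanish`, and the `TWEET`/`LABEL` field names declared in this repository's metadata; treat it as an illustration rather than documented API.

```python
from datasets import load_dataset

# Assumptions: Hub id "catalonia_independence" with configs "catalan" and "spanish";
# "LABEL" is a ClassLabel over the AGAINST / FAVOR / NEUTRAL stances from the summary.
for config in ("catalan", "spanish"):
    corpus = load_dataset("catalonia_independence", config)
    label_feature = corpus["train"].features["LABEL"]
    example = corpus["train"][0]
    print(config, label_feature.int2str(example["LABEL"]), "|", example["TWEET"][:80])
```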
catalonia_independence
[ "task_categories:text-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:ca", "language:es", "license:cc-by-nc-sa-4.0", "stance-detection", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["ca", "es"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "paperswithcode_id": "cic", "pretty_name": "Catalonia Independence Corpus", "config_names": ["catalan", "spanish"], "tags": ["stance-detection"], "dataset_info": [{"config_name": "catalan", "features": [{"name": "id_str", "dtype": "string"}, {"name": "TWEET", "dtype": "string"}, {"name": "LABEL", "dtype": {"class_label": {"names": {"0": "AGAINST", "1": "FAVOR", "2": "NEUTRAL"}}}}], "splits": [{"name": "train", "num_bytes": 1406242, "num_examples": 6028}, {"name": "test", "num_bytes": 469196, "num_examples": 2010}, {"name": "validation", "num_bytes": 473385, "num_examples": 2010}], "download_size": 1638682, "dataset_size": 2348823}, {"config_name": "spanish", "features": [{"name": "id_str", "dtype": "string"}, {"name": "TWEET", "dtype": "string"}, {"name": "LABEL", "dtype": {"class_label": {"names": {"0": "AGAINST", "1": "FAVOR", "2": "NEUTRAL"}}}}], "splits": [{"name": "train", "num_bytes": 1507380, "num_examples": 6046}, {"name": "test", "num_bytes": 501775, "num_examples": 2016}, {"name": "validation", "num_bytes": 505084, "num_examples": 2015}], "download_size": 1760636, "dataset_size": 2514239}], "configs": [{"config_name": "catalan", "data_files": [{"split": "train", "path": "catalan/train-*"}, {"split": "test", "path": "catalan/test-*"}, {"split": "validation", "path": "catalan/validation-*"}], "default": true}, {"config_name": "spanish", "data_files": [{"split": "train", "path": "spanish/train-*"}, {"split": "test", "path": "spanish/test-*"}, {"split": "validation", "path": "spanish/validation-*"}]}]}
2024-01-16T13:54:09+00:00
[]
[ "ca", "es" ]
3f09f8235b6c80bb737fc3b0e5d10320208ae33b
# Dataset Card for Cats Vs. Dogs ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Cats vs Dogs Dataset](https://www.microsoft.com/en-us/download/details.aspx?id=54765) - **Repository:** - **Paper:** [Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization](https://www.microsoft.com/en-us/research/wp-content/uploads/2007/10/CCS2007.pdf) - **Leaderboard:** [Dogs vs. Cats](https://www.kaggle.com/competitions/dogs-vs-cats) - **Point of Contact:** ### Dataset Summary A large set of images of cats and dogs. There are 1738 corrupted images that are dropped. This dataset is part of a now-closed Kaggle competition and represents a subset of the so-called Asirra dataset. From the competition page: > The Asirra data set > > Web services are often protected with a challenge that's supposed to be easy for people to solve, but difficult for computers. Such a challenge is often called a [CAPTCHA](http://www.captcha.net/) (Completely Automated Public Turing test to tell Computers and Humans Apart) or HIP (Human Interactive Proof). HIPs are used for many purposes, such as to reduce email and blog spam and prevent brute-force attacks on web site passwords. > > Asirra (Animal Species Image Recognition for Restricting Access) is a HIP that works by asking users to identify photographs of cats and dogs. This task is difficult for computers, but studies have shown that people can accomplish it quickly and accurately. Many even think it's fun! > > Asirra is unique because of its partnership with [Petfinder.com](https://www.petfinder.com/), the world's largest site devoted to finding homes for homeless pets. They've provided Microsoft Research with over three million images of cats and dogs, manually classified by people at thousands of animal shelters across the United States. Kaggle is fortunate to offer a subset of this data for fun and research. ### Supported Tasks and Leaderboards - `image-classification`: The goal of this task is to classify a given image as either containing a cat or a dog. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cats-vs-dogs). ### Languages English. 
## Dataset Structure ### Data Instances A sample from the training set is provided below: ``` { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x29CEAD71780>, 'labels': 0 } ``` ### Data Fields The data instances have the following fields: - `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`. - `labels`: an `int` classification label. Class Label Mappings: ``` { "cat": 0, "dog": 1, } ``` ### Data Splits | | train | |---------------|------:| | # of examples | 23410 | ## Dataset Creation ### Curation Rationale This subset was built to test whether computer vision algorithms can beat the Asirra CAPTCHA: From the competition page: > Image recognition attacks > > While random guessing is the easiest form of attack, various forms of image recognition can allow an attacker to make guesses that are better than random. There is enormous diversity in the photo database (a wide variety of backgrounds, angles, poses, lighting, etc.), making accurate automatic classification difficult. In an informal poll conducted many years ago, computer vision experts posited that a classifier with better than 60% accuracy would be difficult without a major advance in the state of the art. For reference, a 60% classifier improves the guessing probability of a 12-image HIP from 1/4096 to 1/459. ### Source Data #### Initial Data Collection and Normalization This dataset is a subset of the Asirra dataset. From the competition page: > Asirra is unique because of its partnership with Petfinder.com, the world's largest site devoted to finding homes for homeless pets. They've provided Microsoft Research with over three million images of cats and dogs, manually classified by people at thousands of animal shelters across the United States. #### Who are the source language producers? The users of [Petfinder.com](https://www.petfinder.com/). ### Annotations #### Annotation process The images were annotated by selecting a pet category on [Petfinder.com](https://www.petfinder.com/). #### Who are the annotators? The users of [Petfinder.com](https://www.petfinder.com/). ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases From the paper: > Unlike many image-based CAPTCHAs which are abstract or subjective, Asirra’s challenges are concrete, inoffensive (cute, by some accounts), require no specialized or culturally biased knowledge, and have definite ground truth. This makes Asirra less frustrating for humans. Some beta-testers found it fun. 
The four-year-old child of one asked several times to “play the cat and dog game again.” ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ```bibtex @inproceedings{asirra-a-captcha-that-exploits-interest-aligned-manual-image-categorization, author = {Elson, Jeremy and Douceur, John (JD) and Howell, Jon and Saul, Jared}, title = {Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization}, booktitle = {Proceedings of 14th ACM Conference on Computer and Communications Security (CCS)}, year = {2007}, month = {October}, publisher = {Association for Computing Machinery, Inc.}, url = {https://www.microsoft.com/en-us/research/publication/asirra-a-captcha-that-exploits-interest-aligned-manual-image-categorization/}, } ``` ### Contributions Thanks to [@nateraw](https://github.com/nateraw) for adding this dataset.
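Putting the decoding advice from the Data Fields section into practice, here is a minimal loading sketch. It assumes the `datasets` library and the `cats_vs_dogs` Hub identifier; the access pattern itself follows the card's own recommendation.

```python
from datasets import load_dataset

# Assumption: Hub id "cats_vs_dogs"; the only split is "train" (23,410 examples).
dataset = load_dataset("cats_vs_dogs", split="train")

# Index the row first, then the "image" column, so only this one file is decoded;
# dataset["image"][0] would decode every image in the split before indexing.
sample = dataset[0]
image, label = sample["image"], sample["labels"]

print(image.size)                                 # PIL image, e.g. (500, 375)
print(dataset.features["labels"].int2str(label))  # "cat" or "dog"
```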
cats_vs_dogs
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "cats-vs-dogs", "pretty_name": "Cats Vs. Dogs", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "cat", "1": "dog"}}}}], "splits": [{"name": "train", "num_bytes": 3844792, "num_examples": 23410}], "download_size": 824887076, "dataset_size": 3844792}}
2024-01-16T14:26:35+00:00
[]
[ "en" ]
For reference, a 60% classifier improves the guessing probability of a 12-image HIP from 1/4096 to 1/459.\n> \n> \n>", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThis dataset is a subset of the Asirra dataset.\n\n\nFrom the competition page:\n\n\n\n> \n> Asirra is unique because of its partnership with URL, the world's largest site devoted to finding homes for homeless pets. They've provided Microsoft Research with over three million images of cats and dogs, manually classified by people at thousands of animal shelters across the United States.\n> \n> \n>", "#### Who are the source language producers?\n\n\nThe users of URL.", "### Annotations", "#### Annotation process\n\n\nThe images were annotated by selecting a pet category on URL.", "#### Who are the annotators?\n\n\nThe users of URL.", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases\n\n\nFrom the paper:\n\n\n\n> \n> Unlike many image-based CAPTCHAs which are abstract or subjective, Asirra’s challenges are concrete, inoffensive (cute, by some accounts), require no specialized or culturally biased knowledge, and have definite ground truth. This\n> makes Asirra less frustrating for humans. Some beta-testers found it fun. The four-year-old child of one asked several times to “play the cat and dog game again.”\n> \n> \n>", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @nateraw for adding this dataset." ]
[ 94, 343, 47, 13, 16, 161, 11, 185, 4, 98, 15, 5, 20, 14, 18, 7, 112, 14, 6, 6, 17 ]
[ "passage: TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n### Dataset Summary\n\n\nA large set of images of cats and dogs. There are 1738 corrupted images that are dropped. This dataset is part of a now-closed Kaggle competition and represents a subset of the so-called Asirra dataset.\n\n\nFrom the competition page:\n\n\n\n> \n> The Asirra data set\n> \n> \n> Web services are often protected with a challenge that's supposed to be easy for people to solve, but difficult for computers. Such a challenge is often called a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) or HIP (Human Interactive Proof). HIPs are used for many purposes, such as to reduce email and blog spam and prevent brute-force attacks on web site passwords.\n> \n> \n> Asirra (Animal Species Image Recognition for Restricting Access) is a HIP that works by asking users to identify photographs of cats and dogs. This task is difficult for computers, but studies have shown that people can accomplish it quickly and accurately. Many even think it's fun! Here is an example of the Asirra interface:\n> \n> \n> Asirra is unique because of its partnership with URL, the world's largest site devoted to finding homes for homeless pets. They've provided Microsoft Research with over three million images of cats and dogs, manually classified by people at thousands of animal shelters across the United States. Kaggle is fortunate to offer a subset of this data for fun and research.\n> \n> \n>### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The goal of this task is to classify a given image as either containing a cat or a dog. The leaderboard is available here.### Languages\n\n\nEnglish.\n\n\nDataset Structure\n-----------------", "passage: ### Data Instances\n\n\nA sample from the training set is provided below:### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* 'labels': an 'int' classification label.\n\n\nClass Label Mappings:### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nThis subset was to built to test whether computer vision algorithms can beat the Asirra CAPTCHA:\n\n\nFrom the competition page:\n\n\n\n> \n> Image recognition attacks\n> \n> \n> While random guessing is the easiest form of attack, various forms of image recognition can allow an attacker to make guesses that are better than random. There is enormous diversity in the photo database (a wide variety of backgrounds, angles, poses, lighting, etc.), making accurate automatic classification difficult. In an informal poll conducted many years ago, computer vision experts posited that a classifier with better than 60% accuracy would be difficult without a major advance in the state of the art. 
For reference, a 60% classifier improves the guessing probability of a 12-image HIP from 1/4096 to 1/459.\n> \n> \n>### Source Data#### Initial Data Collection and Normalization\n\n\nThis dataset is a subset of the Asirra dataset.\n\n\nFrom the competition page:\n\n\n\n> \n> Asirra is unique because of its partnership with URL, the world's largest site devoted to finding homes for homeless pets. They've provided Microsoft Research with over three million images of cats and dogs, manually classified by people at thousands of animal shelters across the United States.\n> \n> \n>#### Who are the source language producers?\n\n\nThe users of URL.### Annotations#### Annotation process\n\n\nThe images were annotated by selecting a pet category on URL." ]
7dc6be007333a09f1b5d2474508c43d18551859d
# Dataset Card for caWaC ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://nlp.ffzg.hr/resources/corpora/cawac/ - **Repository:** http://nlp.ffzg.hr/data/corpora/cawac.uniq.sortr.gz - **Paper:** http://www.lrec-conf.org/proceedings/lrec2014/pdf/841_Paper.pdf - **Leaderboard:** - **Point of Contact:** [Nikola Ljubešič](mailto:nikola.ljubesic@ffzg.hr) ### Dataset Summary caWaC is a 780-million-token web corpus of Catalan built from the .cat top-level domain in late 2013. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The dataset is monolingual, in the Catalan language. ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information The dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) license. 
### Citation Information ``` @inproceedings{DBLP:conf/lrec/LjubesicT14, author = {Nikola Ljubesic and Antonio Toral}, editor = {Nicoletta Calzolari and Khalid Choukri and Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani and Asunci{\'{o}}n Moreno and Jan Odijk and Stelios Piperidis}, title = {caWaC - {A} web corpus of Catalan and its application to language modeling and machine translation}, booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation, {LREC} 2014, Reykjavik, Iceland, May 26-31, 2014}, pages = {1728--1732}, publisher = {European Language Resources Association {(ELRA)}}, year = {2014}, url = {http://www.lrec-conf.org/proceedings/lrec2014/summaries/841.html}, timestamp = {Mon, 19 Aug 2019 15:23:35 +0200}, biburl = {https://dblp.org/rec/conf/lrec/LjubesicT14.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
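The card above gives no usage example, so here is a minimal sketch of loading the corpus with the Hugging Face `datasets` library; it assumes the Hub id `cawac` and the `sentence` field shown in this record's metadata below, and streams to avoid downloading the full ~4 GB split up front:

```python
from datasets import load_dataset

# Stream the single "train" split (roughly 25 million sentences) instead of
# materializing it locally.
cawac = load_dataset("cawac", split="train", streaming=True)

for example in cawac:
    print(example["sentence"])  # each record holds one Catalan sentence
    break
```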
cawac
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:ca", "license:cc-by-sa-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["ca"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "cawac", "pretty_name": "caWaC", "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3987228544, "num_examples": 24745986}], "download_size": 2835862485, "dataset_size": 3987228544}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-16T15:50:41+00:00
[]
[ "ca" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-Catalan #license-cc-by-sa-3.0 #region-us
# Dataset Card for caWaC ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: Nikola Ljubešič ### Dataset Summary caWaC is a 780-million-token web corpus of Catalan built from the .cat top-level domain in late 2013. ### Supported Tasks and Leaderboards ### Languages The dataset is monolingual, in the Catalan language. ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information The dataset is under the CC-BY-SA 3.0 license. ### Contributions Thanks to @albertvillanova for adding this dataset.
[ "# Dataset Card for caWaC", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Nikola Ljubešič", "### Dataset Summary\n\ncaWaC is a 780-million-token web corpus of Catalan built from the .cat top-level-domain in late 2013.", "### Supported Tasks and Leaderboards", "### Languages\n\nDataset is monolingual in Catalan language.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nDataset is under the CC-BY-SA 3.0 license.", "### Contributions\n\nThanks to @albertvillanova for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-Catalan #license-cc-by-sa-3.0 #region-us \n", "# Dataset Card for caWaC", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Nikola Ljubešič", "### Dataset Summary\n\ncaWaC is a 780-million-token web corpus of Catalan built from the .cat top-level-domain in late 2013.", "### Supported Tasks and Leaderboards", "### Languages\n\nDataset is monolingual in Catalan language.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nDataset is under the CC-BY-SA 3.0 license.", "### Contributions\n\nThanks to @albertvillanova for adding this dataset." ]
[ 116, 8, 120, 31, 38, 10, 14, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 19, 18 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-Catalan #license-cc-by-sa-3.0 #region-us \n# Dataset Card for caWaC## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Nikola Ljubešič### Dataset Summary\n\ncaWaC is a 780-million-token web corpus of Catalan built from the .cat top-level-domain in late 2013.### Supported Tasks and Leaderboards### Languages\n\nDataset is monolingual in Catalan language.## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\nDataset is under the CC-BY-SA 3.0 license.### Contributions\n\nThanks to @albertvillanova for adding this dataset." ]
72b5c46b1248e3316360f0f2f0b2c39e773b68e4
# Dataset Card for CBT ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [The bAbI project](https://research.fb.com/downloads/babi/) - **Repository:** - **Paper:** [arXiv Paper](https://arxiv.org/pdf/1511.02301.pdf) - **Leaderboard:** - **Point of Contact:** [Felix Hill](mailto:felix.hill@cl.cam.ac.uk) or [Antoine Bordes](mailto:abordes@fb.com). ### Dataset Summary The Children’s Book Test (CBT) is designed to measure directly how well language models can exploit wider linguistic context. The CBT is built from books that are freely available. This dataset contains four different configurations: - `V`: where the answers to the questions are verbs. - `P`: where the answers to the questions are pronouns. - `NE`: where the answers to the questions are named entities. - `CN`: where the answers to the questions are common nouns. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The data is in English, as written by authors such as Lucy Maud Montgomery, Charles Dickens, and Andrew Lang in storybooks for children. ## Dataset Structure ### Data Instances An instance from the `V` config: ``` {'answer': 'said', 'options': ['christening', 'existed', 'hear', 'knows', 'read', 'remarked', 'said', 'sitting', 'talking', 'wearing'], 'question': "`` They are very kind old ladies in their way , '' XXXXX the king ; `` and were nice to me when I was a boy . 
''", 'sentences': ['This vexed the king even more than the queen , who was very clever and learned , and who had hated dolls when she was a child .', 'However , she , too in spite of all the books she read and all the pictures she painted , would have been glad enough to be the mother of a little prince .', 'The king was anxious to consult the fairies , but the queen would not hear of such a thing .', 'She did not believe in fairies : she said that they had never existed ; and that she maintained , though The History of the Royal Family was full of chapters about nothing else .', 'Well , at long and at last they had a little boy , who was generally regarded as the finest baby that had ever been seen .', 'Even her majesty herself remarked that , though she could never believe all the courtiers told her , yet he certainly was a fine child -- a very fine child .', 'Now , the time drew near for the christening party , and the king and queen were sitting at breakfast in their summer parlour talking over it .', 'It was a splendid room , hung with portraits of the royal ancestors .', 'There was Cinderella , the grandmother of the reigning monarch , with her little foot in her glass slipper thrust out before her .', 'There was the Marquis de Carabas , who , as everyone knows , was raised to the throne as prince consort after his marriage with the daughter of the king of the period .', 'On the arm of the throne was seated his celebrated cat , wearing boots .', 'There , too , was a portrait of a beautiful lady , sound asleep : this was Madame La Belle au Bois-dormant , also an ancestress of the royal family .', 'Many other pictures of celebrated persons were hanging on the walls .', "`` You have asked all the right people , my dear ? ''", 'said the king .', "`` Everyone who should be asked , '' answered the queen .", "`` People are so touchy on these occasions , '' said his majesty .", "`` You have not forgotten any of our aunts ? ''", "`` No ; the old cats ! ''", "replied the queen ; for the king 's aunts were old-fashioned , and did not approve of her , and she knew it ."]} ``` ### Data Fields For the `raw` config, the data fields are: - `title`: a `string` feature containing the title of the book present in the dataset. - `content`: a `string` feature containing the content of the book present in the dataset. For all other configs, the data fields are: - `sentences`: a `list` of `string` features containing 20 sentences from a book. - `question`: a `string` feature containing a question with the blank marked as `XXXXX`, which is to be filled with one of the options. - `answer`: a `string` feature containing the answer. - `options`: a `list` of `string` features containing the options for the question. ### Data Splits The splits and corresponding sizes are: | |train |test |validation| |:--|------:|----:|---------:| |raw|98 |5 |5 | |V |105825 |2500 |2000 | |P |334030 |2500 |2000 | |CN |120769 |2500 |2000 | |NE |108719 |2500 |2000 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Children's Book Authors ### Annotations #### Annotation process From the [homepage](https://research.fb.com/downloads/babi/): >After allocating books to either training, validation or test sets, we formed example ‘questions’ from chapters in the book by enumerating 21 consecutive sentences. 
In each question, the first 20 sentences form the context, and a word is removed from the 21st sentence, which becomes the query. Models must identify the answer word among a selection of 10 candidate answers appearing in the context sentences and the query. For finer-grained analyses, we evaluated four classes of question by removing distinct types of word: Named Entities, (Common) Nouns, Verbs and Prepositions. #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information ``` GNU Free Documentation License v1.3 ``` ### Citation Information ``` @misc{hill2016goldilocks, title={The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations}, author={Felix Hill and Antoine Bordes and Sumit Chopra and Jason Weston}, year={2016}, eprint={1511.02301}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset.
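As a usage sketch (not part of the original card), the cloze configurations and field names described above can be exercised with the `datasets` library roughly as follows:

```python
from datasets import load_dataset

# Load the verb configuration; "P", "NE", and "CN" expose the same fields,
# while "raw" instead has "title" and "content".
cbt_v = load_dataset("cbt", "V", split="validation")

example = cbt_v[0]
context = " ".join(example["sentences"])        # the 20 context sentences
print(example["question"])                      # the 21st sentence with the blank
assert example["answer"] in example["options"]  # the answer is one of 10 candidates
```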
cbt
[ "task_categories:other", "task_categories:question-answering", "task_ids:multiple-choice-qa", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:n<1K", "source_datasets:original", "language:en", "license:gfdl", "arxiv:1511.02301", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["gfdl"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "n<1K"], "source_datasets": ["original"], "task_categories": ["other", "question-answering"], "task_ids": ["multiple-choice-qa"], "paperswithcode_id": "cbt", "pretty_name": "Children\u2019s Book Test (CBT)", "config_names": ["CN", "NE", "P", "V", "raw"], "dataset_info": [{"config_name": "CN", "features": [{"name": "sentences", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "options", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 301730151, "num_examples": 120769}, {"name": "test", "num_bytes": 6138376, "num_examples": 2500}, {"name": "validation", "num_bytes": 4737257, "num_examples": 2000}], "download_size": 31615166, "dataset_size": 312605784}, {"config_name": "NE", "features": [{"name": "sentences", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "options", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 253551931, "num_examples": 108719}, {"name": "test", "num_bytes": 5707734, "num_examples": 2500}, {"name": "validation", "num_bytes": 4424316, "num_examples": 2000}], "download_size": 29693075, "dataset_size": 263683981}, {"config_name": "P", "features": [{"name": "sentences", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "options", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 852852601, "num_examples": 334030}, {"name": "test", "num_bytes": 6078048, "num_examples": 2500}, {"name": "validation", "num_bytes": 4776981, "num_examples": 2000}], "download_size": 43825356, "dataset_size": 863707630}, {"config_name": "V", "features": [{"name": "sentences", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "options", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 252177649, "num_examples": 105825}, {"name": "test", "num_bytes": 5806625, "num_examples": 2500}, {"name": "validation", "num_bytes": 4556425, "num_examples": 2000}], "download_size": 29992082, "dataset_size": 262540699}, {"config_name": "raw", "features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 25741580, "num_examples": 98}, {"name": "test", "num_bytes": 1528704, "num_examples": 5}, {"name": "validation", "num_bytes": 1182657, "num_examples": 5}], "download_size": 16350790, "dataset_size": 28452941}], "configs": [{"config_name": "CN", "data_files": [{"split": "train", "path": "CN/train-*"}, {"split": "test", "path": "CN/test-*"}, {"split": "validation", "path": "CN/validation-*"}]}, {"config_name": "NE", "data_files": [{"split": "train", "path": "NE/train-*"}, {"split": "test", "path": "NE/test-*"}, {"split": "validation", "path": "NE/validation-*"}]}, {"config_name": "P", "data_files": [{"split": "train", "path": "P/train-*"}, {"split": "test", "path": "P/test-*"}, {"split": "validation", "path": "P/validation-*"}]}, {"config_name": "V", "data_files": [{"split": "train", "path": "V/train-*"}, {"split": "test", "path": "V/test-*"}, {"split": "validation", "path": "V/validation-*"}]}, {"config_name": "raw", "data_files": [{"split": "train", "path": "raw/train-*"}, {"split": "test", "path": "raw/test-*"}, {"split": "validation", "path": 
"raw/validation-*"}]}]}
2024-01-16T16:01:16+00:00
[ "1511.02301" ]
[ "en" ]
TAGS #task_categories-other #task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-n<1K #source_datasets-original #language-English #license-gfdl #arxiv-1511.02301 #region-us
Dataset Card for CBT ==================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: The bAbI project * Repository: * Paper: arXiv Paper * Leaderboard: * Point of Contact: Felix Hill or Antoine Bordes. ### Dataset Summary The Children’s Book Test (CBT) is designed to measure directly how well language models can exploit wider linguistic context. The CBT is built from books that are freely available. This dataset contains four different configurations: * 'V': where the answers to the questions are verbs. * 'P': where the answers to the questions are pronouns. * 'NE': where the answers to the questions are named entities. * 'CN': where the answers to the questions are common nouns. ### Supported Tasks and Leaderboards ### Languages The data is in English, as written by authors such as Lucy Maud Montgomery, Charles Dickens, and Andrew Lang in storybooks for children. Dataset Structure ----------------- ### Data Instances An instance from the 'V' config: ### Data Fields For the 'raw' config, the data fields are: * 'title': a 'string' feature containing the title of the book present in the dataset. * 'content': a 'string' feature containing the content of the book present in the dataset. For all other configs, the data fields are: * 'sentences': a 'list' of 'string' features containing 20 sentences from a book. * 'question': a 'string' feature containing a question with the blank marked as 'XXXXX', which is to be filled with one of the options. * 'answer': a 'string' feature containing the answer. * 'options': a 'list' of 'string' features containing the options for the question. ### Data Splits The splits and corresponding sizes are: Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? Children's Book Authors ### Annotations #### Annotation process From the homepage: > > After allocating books to either training, validation or test sets, we formed example ‘questions’ from chapters in the book by enumerating 21 consecutive sentences. In each question, the first 20 sentences form the context, and a word is removed from the 21st sentence, which becomes the query. Models must identify the answer word among a selection of 10 candidate answers appearing in the context sentences and the query. For finer-grained analyses, we evaluated four classes of question by removing distinct types of word: Named Entities, (Common) Nouns, Verbs and Prepositions. > > > #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @gchhablani for adding this dataset.
[ "### Dataset Summary\n\n\nThe Children’s Book Test (CBT) is designed to measure directly how well language models can exploit wider linguistic context. The CBT is built from books that are freely available.\n\n\nThis dataset contains four different configurations:\n\n\n* 'V': where the answers to the questions are verbs.\n* 'P': where the answers to the questions are pronouns.\n* 'NE': where the answers to the questions are named entities.\n* 'CN': where the answers to the questions are common nouns.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe data is present in English language as written by authors Lucy Maud Montgomery, Charles Dickens,Andrew Lang, etc. in story books for children.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn instance from the 'V' config:", "### Data Fields\n\n\nFor the 'raw' config, the data fields are:\n\n\n* 'title': a 'string' feature containing the title of the book present in the dataset.\n* 'content': a 'string' feature containing the content of the book present in the dataset.\n\n\nFor all other configs, the data fields are:\n\n\n* 'sentences': a 'list' of 'string' features containing 20 sentences from a book.\n* 'question': a 'string' feature containing a question with blank marked as 'XXXX' which is to be filled with one of the options.\n* 'answer': a 'string' feature containing the answer.\n* 'options': a 'list' of 'string' features containing the options for the question.", "### Data Splits\n\n\nThe splits and corresponding sizes are:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nChildren's Book Authors", "### Annotations", "#### Annotation process\n\n\nFrom the homepage:\n\n\n\n> \n> After allocating books to either training, validation or test sets, we formed example ‘questions’ from chapters in the book by enumerating 21 consecutive sentences. In each question, the first 20 sentences form the context, and a word is removed from the 21st sentence, which becomes the query. Models must identify the answer word among a selection of 10 candidate answers appearing in the context sentences and the query. For finer-grained analyses, we evaluated four classes of question by removing distinct types of word: Named Entities, (Common) Nouns, Verbs and Prepositions.\n> \n> \n>", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @gchhablani for adding this dataset." ]
[ "TAGS\n#task_categories-other #task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-n<1K #source_datasets-original #language-English #license-gfdl #arxiv-1511.02301 #region-us \n", "### Dataset Summary\n\n\nThe Children’s Book Test (CBT) is designed to measure directly how well language models can exploit wider linguistic context. The CBT is built from books that are freely available.\n\n\nThis dataset contains four different configurations:\n\n\n* 'V': where the answers to the questions are verbs.\n* 'P': where the answers to the questions are pronouns.\n* 'NE': where the answers to the questions are named entities.\n* 'CN': where the answers to the questions are common nouns.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe data is present in English language as written by authors Lucy Maud Montgomery, Charles Dickens,Andrew Lang, etc. in story books for children.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn instance from the 'V' config:", "### Data Fields\n\n\nFor the 'raw' config, the data fields are:\n\n\n* 'title': a 'string' feature containing the title of the book present in the dataset.\n* 'content': a 'string' feature containing the content of the book present in the dataset.\n\n\nFor all other configs, the data fields are:\n\n\n* 'sentences': a 'list' of 'string' features containing 20 sentences from a book.\n* 'question': a 'string' feature containing a question with blank marked as 'XXXX' which is to be filled with one of the options.\n* 'answer': a 'string' feature containing the answer.\n* 'options': a 'list' of 'string' features containing the options for the question.", "### Data Splits\n\n\nThe splits and corresponding sizes are:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nChildren's Book Authors", "### Annotations", "#### Annotation process\n\n\nFrom the homepage:\n\n\n\n> \n> After allocating books to either training, validation or test sets, we formed example ‘questions’ from chapters in the book by enumerating 21 consecutive sentences. In each question, the first 20 sentences form the context, and a word is removed from the 21st sentence, which becomes the query. Models must identify the answer word among a selection of 10 candidate answers appearing in the context sentences and the query. For finer-grained analyses, we evaluated four classes of question by removing distinct types of word: Named Entities, (Common) Nouns, Verbs and Prepositions.\n> \n> \n>", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @gchhablani for adding this dataset." ]
[ 118, 123, 10, 46, 16, 174, 21, 7, 4, 10, 16, 5, 157, 9, 18, 7, 8, 14, 6, 6, 18 ]
[ "passage: TAGS\n#task_categories-other #task_categories-question-answering #task_ids-multiple-choice-qa #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-n<1K #source_datasets-original #language-English #license-gfdl #arxiv-1511.02301 #region-us \n### Dataset Summary\n\n\nThe Children’s Book Test (CBT) is designed to measure directly how well language models can exploit wider linguistic context. The CBT is built from books that are freely available.\n\n\nThis dataset contains four different configurations:\n\n\n* 'V': where the answers to the questions are verbs.\n* 'P': where the answers to the questions are pronouns.\n* 'NE': where the answers to the questions are named entities.\n* 'CN': where the answers to the questions are common nouns.### Supported Tasks and Leaderboards### Languages\n\n\nThe data is present in English language as written by authors Lucy Maud Montgomery, Charles Dickens,Andrew Lang, etc. in story books for children.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn instance from the 'V' config:### Data Fields\n\n\nFor the 'raw' config, the data fields are:\n\n\n* 'title': a 'string' feature containing the title of the book present in the dataset.\n* 'content': a 'string' feature containing the content of the book present in the dataset.\n\n\nFor all other configs, the data fields are:\n\n\n* 'sentences': a 'list' of 'string' features containing 20 sentences from a book.\n* 'question': a 'string' feature containing a question with blank marked as 'XXXX' which is to be filled with one of the options.\n* 'answer': a 'string' feature containing the answer.\n* 'options': a 'list' of 'string' features containing the options for the question." ]
fc308e7f37a9c8b693b7a5cce99c2679c57af320
# Dataset Card for CC100 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://data.statmt.org/cc-100/ - **Repository:** None - **Paper:** https://www.aclweb.org/anthology/2020.acl-main.747.pdf, https://www.aclweb.org/anthology/2020.lrec-1.494.pdf - **Leaderboard:** [More Information Needed] - **Point of Contact:** [More Information Needed] ### Dataset Summary This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises monolingual data for 100+ languages and also includes data for romanized languages (indicated by *_rom). It was constructed using the URLs and paragraph indices provided by the CC-Net repository by processing January-December 2018 Common Crawl snapshots. ### Supported Tasks and Leaderboards CC-100 is mainly intended for pretraining language models and word representations. ### Languages To load a language which isn't part of the predefined configs, all you need to do is specify its language code as the config. You can find the valid language codes in the Homepage section of the Dataset Description: https://data.statmt.org/cc-100/ E.g. `dataset = load_dataset("cc100", lang="en")` ## Dataset Structure ### Data Instances An example from the `am` configuration: ``` {'id': '0', 'text': 'ተለዋዋጭ የግድግዳ አንግል ሙቅ አንቀሳቅሷል ቲ-አሞሌ አጥቅሼ ...\n'} ``` Each data point is a paragraph of text. The paragraphs are presented in the original (unshuffled) order. Documents are separated by a data point consisting of a single newline character. ### Data Fields The data fields are: - id: id of the example - text: content as a string ### Data Splits Sizes of some configurations: | name |train| |----------|----:| |am|3124561| |sr|35747957| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data [More Information Needed] #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? The data comes from multiple web pages in a large variety of languages. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Being constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with CC-100, especially in the case of text-generation models. 
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was prepared by [Statistical Machine Translation at the University of Edinburgh](https://www.statmt.org/ued/) using the [CC-Net](https://github.com/facebookresearch/cc_net) toolkit by Facebook Research. ### Licensing Information Statistical Machine Translation at the University of Edinburgh makes no claims of intellectual property on the work of preparation of the corpus. By using this, you are also bound by the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/) in respect of the content contained in the dataset. ### Citation Information ```bibtex @inproceedings{conneau-etal-2020-unsupervised, title = "Unsupervised Cross-lingual Representation Learning at Scale", author = "Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.747", doi = "10.18653/v1/2020.acl-main.747", pages = "8440--8451", abstract = "This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{\%} average accuracy on XNLI, +13{\%} average F1 score on MLQA, and +2.4{\%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{\%} in XNLI accuracy for Swahili and 11.4{\%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.", } ``` ```bibtex @inproceedings{wenzek-etal-2020-ccnet, title = "{CCN}et: Extracting High Quality Monolingual Datasets from Web Crawl Data", author = "Wenzek, Guillaume and Lachaux, Marie-Anne and Conneau, Alexis and Chaudhary, Vishrav and Guzm{\'a}n, Francisco and Joulin, Armand and Grave, Edouard", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://www.aclweb.org/anthology/2020.lrec-1.494", pages = "4003--4012", abstract = "Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. 
In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.", language = "English", ISBN = "979-10-95546-34-4", } ``` ### Contributions Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
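To make the Languages note above concrete, here is a minimal sketch (illustrative, not an official snippet) that loads an arbitrary language by its code and reconstructs the first document, relying on the newline-only records that separate documents:

```python
from datasets import load_dataset

# Any valid language code from the homepage list can be passed via `lang`;
# streaming avoids downloading the whole snapshot up front.
cc100_sw = load_dataset("cc100", lang="sw", split="train", streaming=True)

paragraphs = []
for example in cc100_sw:
    text = example["text"].strip()
    if text:          # a paragraph of the current document
        paragraphs.append(text)
    else:             # a newline-only record marks the document boundary
        break

print("\n".join(paragraphs))  # the first document, one paragraph per line
```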
cc100
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:multilingual", "size_categories:10M<n<100M", "size_categories:1M<n<10M", "source_datasets:original", "language:af", "language:am", "language:ar", "language:as", "language:az", "language:be", "language:bg", "language:bn", "language:br", "language:bs", "language:ca", "language:cs", "language:cy", "language:da", "language:de", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:ff", "language:fi", "language:fr", "language:fy", "language:ga", "language:gd", "language:gl", "language:gn", "language:gu", "language:ha", "language:he", "language:hi", "language:hr", "language:ht", "language:hu", "language:hy", "language:id", "language:ig", "language:is", "language:it", "language:ja", "language:jv", "language:ka", "language:kk", "language:km", "language:kn", "language:ko", "language:ku", "language:ky", "language:la", "language:lg", "language:li", "language:ln", "language:lo", "language:lt", "language:lv", "language:mg", "language:mk", "language:ml", "language:mn", "language:mr", "language:ms", "language:my", "language:ne", "language:nl", "language:no", "language:ns", "language:om", "language:or", "language:pa", "language:pl", "language:ps", "language:pt", "language:qu", "language:rm", "language:ro", "language:ru", "language:sa", "language:sc", "language:sd", "language:si", "language:sk", "language:sl", "language:so", "language:sq", "language:sr", "language:ss", "language:su", "language:sv", "language:sw", "language:ta", "language:te", "language:th", "language:tl", "language:tn", "language:tr", "language:ug", "language:uk", "language:ur", "language:uz", "language:vi", "language:wo", "language:xh", "language:yi", "language:yo", "language:zh", "language:zu", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gn", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lg", "li", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "om", "or", "pa", "pl", "ps", "pt", "qu", "rm", "ro", "ru", "sa", "sc", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "te", "th", "tl", "tn", "tr", "ug", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["10M<n<100M", "1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "cc100", "pretty_name": "CC100", "config_names": ["am", "sr"], "language_bcp47": ["bn-Latn", "hi-Latn", "my-x-zawgyi", "ta-Latn", "te-Latn", "ur-Latn", "zh-Hans", "zh-Hant"], "dataset_info": [{"config_name": "am", "features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 935440775, "num_examples": 3124561}], "download_size": 138821056, "dataset_size": 935440775}, {"config_name": "sr", "features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10299427460, "num_examples": 35747957}], "download_size": 1578989320, "dataset_size": 10299427460}, {"config_name": "ka", "features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10228918845, "num_examples": 31708119}], "download_size": 1100446372, "dataset_size": 10228918845}]}
2024-01-18T11:02:10+00:00
[]
[ "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gn", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lg", "li", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "om", "or", "pa", "pl", "ps", "pt", "qu", "rm", "ro", "ru", "sa", "sc", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "te", "th", "tl", "tn", "tr", "ug", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-10M<n<100M #size_categories-1M<n<10M #source_datasets-original #language-Afrikaans #language-Amharic #language-Arabic #language-Assamese #language-Azerbaijani #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Bosnian #language-Catalan #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Fulah #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Scottish Gaelic #language-Galician #language-Guarani #language-Gujarati #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Haitian #language-Hungarian #language-Armenian #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Georgian #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Kurdish #language-Kirghiz #language-Latin #language-Ganda #language-Limburgan #language-Lingala #language-Lao #language-Lithuanian #language-Latvian #language-Malagasy #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Burmese #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian #language-ns #language-Oromo #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Quechua #language-Romansh #language-Romanian #language-Russian #language-Sanskrit #language-Sardinian #language-Sindhi #language-Sinhala #language-Slovak #language-Slovenian #language-Somali #language-Albanian #language-Serbian #language-Swati #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tagalog #language-Tswana #language-Turkish #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Wolof #language-Xhosa #language-Yiddish #language-Yoruba #language-Chinese #language-Zulu #license-unknown #region-us
Dataset Card for CC100 ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: None * Paper: URL URL * Leaderboard: * Point of Contact: ### Dataset Summary This corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises monolingual data for 100+ languages and also includes data for romanized languages (indicated by \*\_rom). It was constructed using the URLs and paragraph indices provided by the CC-Net repository by processing January-December 2018 Common Crawl snapshots. ### Supported Tasks and Leaderboards CC-100 is mainly intended for pretraining language models and word representations. ### Languages To load a language which isn't part of the config, all you need to do is specify the language code in the config. You can find the valid languages in the Homepage section of Dataset Description: URL E.g. 'dataset = load\_dataset("cc100", lang="en")' Dataset Structure ----------------- ### Data Instances An example from the 'am' configuration: Each data point is a paragraph of text. The paragraphs are presented in the original (unshuffled) order. Documents are separated by a data point consisting of a single newline character. ### Data Fields The data fields are: * id: id of the example * text: content as a string ### Data Splits Sizes of some configurations: Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? The data comes from multiple web pages in a large variety of languages. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Because the corpus is constructed from Common Crawl, personal and sensitive information might be present. This must be considered before training deep learning models with CC-100, especially in the case of text-generation models. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators This dataset was prepared by Statistical Machine Translation at the University of Edinburgh using the CC-Net toolkit by Facebook Research. ### Licensing Information Statistical Machine Translation at the University of Edinburgh makes no claims of intellectual property on the work of preparation of the corpus. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset. ### Contributions Thanks to @abhishekkrthakur for adding this dataset.
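A minimal loading sketch for CC-100, following the `load_dataset("cc100", lang=...)` pattern the card describes; the `streaming` flag and the short preview loop are illustrative assumptions rather than part of the original card.

```python
# Hedged sketch: load one CC-100 language configuration with the
# Hugging Face `datasets` library, as the card's example suggests.
from datasets import load_dataset

# "am" (Amharic) is one of the configurations listed in the metadata;
# streaming=True is assumed here so a few paragraphs can be inspected
# without downloading the full dump first.
cc100_am = load_dataset("cc100", lang="am", split="train", streaming=True)

# Each data point is a paragraph with an `id` and raw `text`; per the
# card, an empty paragraph acts as a document separator.
for i, example in enumerate(cc100_am):
    print(example["id"], repr(example["text"][:60]))
    if i >= 4:
        break
```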
[ "### Dataset Summary\n\n\nThis corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages (indicated by \\*\\_rom). This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots.", "### Supported Tasks and Leaderboards\n\n\nCC-100 is mainly inteded to pretrain language models and word represantations.", "### Languages\n\n\nTo load a language which isn't part of the config, all you need to do is specify the language code in the config.\nYou can find the valid languages in Homepage section of Dataset Description: URL\nE.g.\n\n\n'dataset = load\\_dataset(\"cc100\", lang=\"en\")'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from the 'am' configuration:\n\n\nEach data point is a paragraph of text. The paragraphs are presented in the original (unshuffled) order. Documents are separated by a data point consisting of a single newline character.", "### Data Fields\n\n\nThe data fields are:\n\n\n* id: id of the example\n* text: content as a string", "### Data Splits\n\n\nSizes of some configurations:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nThe data comes from multiple web pages in a large variety of languages.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nBeing constructed from Common Crawl, personal and sensitive information might be present. This must be considered before training deep learning models with CC-100, specially in the case of text-generation models.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was prepared by Statistical Machine Translation at the University of Edinburgh using the CC-Net toolkit by Facebook Research.", "### Licensing Information\n\n\nStatistical Machine Translation at the University of Edinburgh makes no claims of intellectual property on the work of preparation of the corpus. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.", "### Contributions\n\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-10M<n<100M #size_categories-1M<n<10M #source_datasets-original #language-Afrikaans #language-Amharic #language-Arabic #language-Assamese #language-Azerbaijani #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Bosnian #language-Catalan #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Fulah #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Scottish Gaelic #language-Galician #language-Guarani #language-Gujarati #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Haitian #language-Hungarian #language-Armenian #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Georgian #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Kurdish #language-Kirghiz #language-Latin #language-Ganda #language-Limburgan #language-Lingala #language-Lao #language-Lithuanian #language-Latvian #language-Malagasy #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Burmese #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian #language-ns #language-Oromo #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Quechua #language-Romansh #language-Romanian #language-Russian #language-Sanskrit #language-Sardinian #language-Sindhi #language-Sinhala #language-Slovak #language-Slovenian #language-Somali #language-Albanian #language-Serbian #language-Swati #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tagalog #language-Tswana #language-Turkish #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Wolof #language-Xhosa #language-Yiddish #language-Yoruba #language-Chinese #language-Zulu #license-unknown #region-us \n", "### Dataset Summary\n\n\nThis corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages (indicated by \\*\\_rom). This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots.", "### Supported Tasks and Leaderboards\n\n\nCC-100 is mainly inteded to pretrain language models and word represantations.", "### Languages\n\n\nTo load a language which isn't part of the config, all you need to do is specify the language code in the config.\nYou can find the valid languages in Homepage section of Dataset Description: URL\nE.g.\n\n\n'dataset = load\\_dataset(\"cc100\", lang=\"en\")'\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from the 'am' configuration:\n\n\nEach data point is a paragraph of text. The paragraphs are presented in the original (unshuffled) order. 
Documents are separated by a data point consisting of a single newline character.", "### Data Fields\n\n\nThe data fields are:\n\n\n* id: id of the example\n* text: content as a string", "### Data Splits\n\n\nSizes of some configurations:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\n\nThe data comes from multiple web pages in a large variety of languages.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nBeing constructed from Common Crawl, personal and sensitive information might be present. This must be considered before training deep learning models with CC-100, specially in the case of text-generation models.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was prepared by Statistical Machine Translation at the University of Edinburgh using the CC-Net toolkit by Facebook Research.", "### Licensing Information\n\n\nStatistical Machine Translation at the University of Edinburgh makes no claims of intellectual property on the work of preparation of the corpus. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.", "### Contributions\n\n\nThanks to @abhishekkrthakur for adding this dataset." ]
[ 745, 96, 28, 80, 59, 25, 18, 7, 4, 10, 25, 17, 10, 14, 60, 7, 8, 14, 32, 60, 20 ]
[ "passage: ", "passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-10M<n<100M #size_categories-1M<n<10M #source_datasets-original #language-Afrikaans #language-Amharic #language-Arabic #language-Assamese #language-Azerbaijani #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Bosnian #language-Catalan #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Fulah #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Scottish Gaelic #language-Galician #language-Guarani #language-Gujarati #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Haitian #language-Hungarian #language-Armenian #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Georgian #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Kurdish #language-Kirghiz #language-Latin #language-Ganda #language-Limburgan #language-Lingala #language-Lao #language-Lithuanian #language-Latvian #language-Malagasy #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Burmese #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian #language-ns #language-Oromo #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Quechua #language-Romansh #language-Romanian #language-Russian #language-Sanskrit #language-Sardinian #language-Sindhi #language-Sinhala #language-Slovak #language-Slovenian #language-Somali #language-Albanian #language-Serbian #language-Swati #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tagalog #language-Tswana #language-Turkish #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Wolof #language-Xhosa #language-Yiddish #language-Yoruba #language-Chinese #language-Zulu #license-unknown #region-us \n### Dataset Summary\n\n\nThis corpus is an attempt to recreate the dataset used for training XLM-R. This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages (indicated by \\*\\_rom). This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots.### Supported Tasks and Leaderboards\n\n\nCC-100 is mainly inteded to pretrain language models and word represantations.### Languages\n\n\nTo load a language which isn't part of the config, all you need to do is specify the language code in the config.\nYou can find the valid languages in Homepage section of Dataset Description: URL\nE.g.\n\n\n'dataset = load\\_dataset(\"cc100\", lang=\"en\")'\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example from the 'am' configuration:\n\n\nEach data point is a paragraph of text. The paragraphs are presented in the original (unshuffled) order. 
Documents are separated by a data point consisting of a single newline character.### Data Fields\n\n\nThe data fields are:\n\n\n* id: id of the example\n* text: content as a string### Data Splits\n\n\nSizes of some configurations:\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?\n\n\nThe data comes from multiple web pages in a large variety of languages.### Annotations\n\n\nThe dataset does not contain any additional annotations.#### Annotation process\n\n\n[N/A]#### Who are the annotators?\n\n\n[N/A]### Personal and Sensitive Information\n\n\nBeing constructed from Common Crawl, personal and sensitive information might be present. This must be considered before training deep learning models with CC-100, specially in the case of text-generation models.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------" ]
81eb2ce0d2a9dad6ad16b68ef750ec290880fa36
# Dataset Card for CC-News ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [CC-News homepage](https://commoncrawl.org/2016/10/news-dataset-available/) - **Point of Contact:** [Vladimir Blagojevic](mailto:dovlex@gmail.com) ### Dataset Summary The CC-News dataset contains news articles from news sites all over the world. The data is available on AWS S3 in the Common Crawl bucket at /crawl-data/CC-NEWS/. This version of the dataset has been prepared using [news-please](https://github.com/fhamborg/news-please) - an integrated web crawler and information extractor for news. It contains 708,241 English-language news articles published between Jan 2017 and December 2019. It represents a small portion of the English-language subset of the CC-News dataset. ### Supported Tasks and Leaderboards CC-News has been mostly used for language model training. ### Languages The text in the dataset is in the English language. ## Dataset Structure ### Data Instances Each dataset instance contains an article and the relevant article fields. An example from the CC-News train set looks as follows: ``` { 'date': '2017-08-14 00:00:00', 'description': '"The spirit of Green Day has always been about rising above oppression."', 'domain': '1041jackfm.cbslocal.com', 'image_url': 'https://cbs1041jackfm.files.wordpress.com/2017/08/billie-joe-armstrong-theo-wargo-getty-images.jpg?w=946', 'text': 'By Abby Hassler\nGreen Day's Billie Joe Armstrong has always been outspoken about his political beliefs. Following the tragedy in Charlottesville, Virgina, over the weekend, Armstrong felt the need to speak out against the white supremacists who caused much of the violence.\nRelated: Billie Joe Armstrong Wins #TBT with Childhood Studio Photo\n"My heart feels heavy. I feel like what happened in Charlottesville goes beyond the point of anger," Armstrong wrote on Facebook. "It makes me sad and desperate. shocked. I f—— hate racism more than anything."\n"The spirit of Green Day has always been about rising above oppression. and sticking up for what you believe in and singing it at the top of your lungs," Armstrong continued. "We grew up fearing nuclear holocaust because of the cold war. those days are feeling way too relevant these days. these issues are our ugly past.. and now it's coming to haunt us. always resist these doomsday politicians. and in the words of our punk forefathers .. Nazi punks f— off."', 'title': 'Green Day's Billie Joe Armstrong Rails Against White Nationalists', 'url': 'http://1041jackfm.cbslocal.com/2017/08/14/billie-joe-armstrong-white-nationalists/' } ``` ### Data Fields - `date`: date of publication - `description`: description or a summary of the article - `domain`: source domain of the article (i.e. www.nytimes.com) - `image_url`: URL of the article's image - `text`: the actual article text in raw form - `title`: title of the article - `url`: article URL, the original URL where it was scraped. ### Data Splits The CC-News dataset has only the training set, i.e. it has to be loaded with the `train` split specified: `cc_news = load_dataset('cc_news', split="train")` ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The CC-News dataset has been proposed, created, and maintained by Sebastian Nagel. The data is publicly available in the AWS S3 Common Crawl bucket at /crawl-data/CC-NEWS/. This version of the dataset has been prepared using [news-please](https://github.com/fhamborg/news-please) - an integrated web crawler and information extractor for news. It contains 708,241 English-language news articles published between Jan 2017 and December 2019. Although news-please tags each news article with an appropriate language tag, these tags are somewhat unreliable. To strictly isolate English-language articles, an additional check was performed using the [Spacy langdetect pipeline](https://spacy.io/universe/project/spacy-langdetect). We selected articles whose text fields scored an 80% or higher probability of being English. There are no strict guarantees that each article has all the relevant fields. For example, 527,595 articles have a valid description field. All articles have what appears to be a valid image URL, but they have not been verified. #### Who are the source language producers? News websites throughout the world. ### Annotations #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information As one can imagine, the data contains information about contemporary public figures and individuals who appeared in the news. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help language model researchers develop better language models. ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @InProceedings{Hamborg2017, author = {Hamborg, Felix and Meuschke, Norman and Breitinger, Corinna and Gipp, Bela}, title = {news-please: A Generic News Crawler and Extractor}, year = {2017}, booktitle = {Proceedings of the 15th International Symposium of Information Science}, location = {Berlin}, doi = {10.5281/zenodo.4120316}, pages = {218--223}, month = {March} } ``` ### Contributions Thanks to [@vblagoje](https://github.com/vblagoje) for adding this dataset.
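As a quick illustration of the single-split layout and the fields listed above, a short, hedged sketch; the `filter` step is an illustrative addition, while the split name and field names come straight from this card.

```python
# Hedged sketch: load CC-News and inspect the article fields the card
# documents (title, text, domain, date, description, url, image_url).
from datasets import load_dataset

cc_news = load_dataset("cc_news", split="train")  # only a train split exists

article = cc_news[0]
print(article["title"])
print(article["domain"], article["date"])
print(article["text"][:120])

# The card notes only ~527,595 of 708,241 articles carry a valid
# description; filtering makes that subset explicit.
with_description = cc_news.filter(lambda ex: bool(ex["description"]))
print(len(with_description), "articles with a description")
```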
cc_news
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "cc-news", "pretty_name": "CC-News", "dataset_info": {"config_name": "plain_text", "features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "domain", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "image_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2016416145, "num_examples": 708241}], "download_size": 1122805586, "dataset_size": 2016416145}, "configs": [{"config_name": "plain_text", "data_files": [{"split": "train", "path": "plain_text/train-*"}], "default": true}]}
2024-01-04T06:45:02+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-unknown #region-us
732e0c60b22e16ea2fddcf7b10e4eeff64f88caa
# Dataset Card for ccaligned_multilingual ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://www.statmt.org/cc-aligned/ - **Repository:** [Needs More Information] - **Paper:** https://www.aclweb.org/anthology/2020.emnlp-main.480.pdf - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web documents and ensuring that corresponding language codes appeared in the URLs of the web documents. This pattern-matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to multiple documents in different target languages, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French). This corpus was created from 68 Common Crawl snapshots. To load a language which isn't part of the config, all you need to do is specify the language code. You can find the valid languages in http://www.statmt.org/cc-aligned/ E.g. ``` dataset = load_dataset("ccaligned_multilingual", language_code="fr_XX", type="documents") ``` or ``` dataset = load_dataset("ccaligned_multilingual", language_code="fr_XX", type="sentences") ``` ### Supported Tasks and Leaderboards [Needs More Information] ### Languages The text in the dataset is in 137 languages aligned with English. 
## Dataset Structure ### Data Instances An instance of `documents` type for language `ak_GH`: ``` {'Domain': 'islamhouse.com', 'Source_URL': 'https://islamhouse.com/en/audios/373088/', 'Target_URL': 'https://islamhouse.com/ak/audios/373088/', 'translation': {'ak_GH': "Ntwatiaa / wɔabɔ no tɔfa wɔ mu no te ase ma Umrah - Arab kasa|Islamhouse.com|Follow us:|facebook|twitter|taepe|Titles All|Fie wibesite|kasa nyina|Buukuu edi adanse ma prente|Nhyehyɛmu|Nyim/sua Islam|Curriculums|Nyina ndeɛma|Nyina ndeɛma (295)|Buukuu/ nwoma (2)|sini / muuvi (31)|ɔdio (262)|Aɛn websideNew!|Kɔ wura kramosom mu seisei|Ebio|figa/kaasɛ|Farebae|AKAkan|Kratafa titriw|kasa interface( anyimu) : Akan|Kasa ma no mu-nsɛm : Arab kasa|ɔdio|Ntwatiaa / wɔabɔ no tɔfa wɔ mu no te ase ma Umrah|play|pause|stop|mute|unmute|max volume|Kasakyerɛ ni :|Farebae:|17 / 11 / 1432 , 15/10/2011|Nhyehyɛmu:|Jurisprudence/ Esum Nimdea|Som|Hajj na Umrah|Jurisprudence/ Esum Nimdea|Som|Hajj na Umrah|Mmira ma Hajj na Umrah|nkyerɛmu|kasamu /sɛntɛns ma te ase na Umrah wɔ ... mu no hann ma no Quran na Sunnah na te ase ma no nana na no kasamu /sɛntɛns ma bi ma no emerging yi adu obusuani|Akenkane we ye di ko kasa bi su (36)|Afar - Qafár afa|Akan|Amhari ne - አማርኛ|Arab kasa - عربي|Assamese - অসমীয়া|Bengali - বাংলা|Maldive - ދިވެހި|Greek - Ελληνικά|English ( brofo kasa) - English|Persian - فارسی|Fula - pulla|French - Français|Hausa - Hausa|Kurdish - كوردی سۆرانی|Uganda ne - Oluganda|Mandinka - Mandinko|Malayalam - മലയാളം|Nepali - नेपाली|Portuguese - Português|Russian - Русский|Sango - Sango|Sinhalese - සිංහල|Somali - Soomaali|Albania ne - Shqip|Swahili - Kiswahili|Telugu - తెలుగు ప్రజలు|Tajik - Тоҷикӣ|Thai - ไทย|Tagalog - Tagalog|Turkish - Türkçe|Uyghur - ئۇيغۇرچە|Urdu - اردو|Uzbeck ne - Ўзбек тили|Vietnamese - Việt Nam|Wolof - Wolof|Chine ne - 中文|Soma kɔ bi kyerɛ adwen kɔ wɛb ebusuapanin|Soma kɔ ne kɔ hom adamfo|Soma kɔ bi kyerɛ adwen kɔ wɛb ebusuapanin|Nsɔwso fael (1)|1|الموجز في فقه العمرة|MP3 14.7 MB|Enoumah ebatahu|Rituals/Esom ajomadie ewu Hajji mmire .. 
1434 AH [01] no fapemso Enum|Fiidbak/ Ye hiya wu jun kyiri|Lenke de yɛe|kɔntakt yɛn|Aɛn webside|Qura'an Kro kronkrom|Balagh|wɔ mfinimfin Dowload faele|Yɛ atuu bra Islam mu afei|Tsin de yɛe ewu|Anaa bomu/combine hɛn melin liste|© Islamhouse Website/ Islam dan webi site|×|×|Yi mu kasa|", 'en_XX': 'SUMMARY in the jurisprudence of Umrah - Arabic - Abdul Aziz Bin Marzooq Al-Turaifi|Islamhouse.com|Follow us:|facebook|twitter|QuranEnc.com|HadeethEnc.com|Type|Titles All|Home Page|All Languages|Categories|Know about Islam|All items|All items (4057)|Books (701)|Articles (548)|Fatawa (370)|Videos (1853)|Audios (416)|Posters (98)|Greeting cards (22)|Favorites (25)|Applications (21)|Desktop Applications (3)|To convert to Islam now !|More|Figures|Sources|Curriculums|Our Services|QuranEnc.com|HadeethEnc.com|ENEnglish|Main Page|Interface Language : English|Language of the content : Arabic|Audios|تعريب عنوان المادة|SUMMARY in the jurisprudence of Umrah|play|pause|stop|mute|unmute|max volume|Lecturer : Abdul Aziz Bin Marzooq Al-Turaifi|Sources:|AlRaya Islamic Recoding in Riyadh|17 / 11 / 1432 , 15/10/2011|Categories:|Islamic Fiqh|Fiqh of Worship|Hajj and Umrah|Islamic Fiqh|Fiqh of Worship|Hajj and Umrah|Pilgrimage and Umrah|Description|SUMMARY in jurisprudence of Umrah: A statement of jurisprudence and Umrah in the light of the Quran and Sunnah and understanding of the Ancestors and the statement of some of the emerging issues related to them.|This page translated into (36)|Afar - Qafár afa|Akane - Akan|Amharic - አማርኛ|Arabic - عربي|Assamese - অসমীয়া|Bengali - বাংলা|Maldivi - ދިވެހި|Greek - Ελληνικά|English|Persian - فارسی|Fula - pulla|French - Français|Hausa - Hausa|kurdish - كوردی سۆرانی|Ugandan - Oluganda|Mandinka - Mandinko|Malayalam - മലയാളം|Nepali - नेपाली|Portuguese - Português|Russian - Русский|Sango - Yanga ti Sango|Sinhalese - සිංහල|Somali - Soomaali|Albanian - Shqip|Swahili - Kiswahili|Telugu - తెలుగు|Tajik - Тоҷикӣ|Thai - ไทย|Tagalog - Tagalog|Turkish - Türkçe|Uyghur - ئۇيغۇرچە|Urdu - اردو|Uzbek - Ўзбек тили|Vietnamese - Việt Nam|Wolof - Wolof|Chinese - 中文|Send a comment to Webmaster|Send to a friend?|Send a comment to Webmaster|Attachments (1)|1|الموجز في فقه العمرة|MP3 14.7 MB|The relevant Material|The rituals of the pilgrimage season .. 1434 AH [ 01] the fifth pillar|The Quality of the Accepted Hajj (Piligrimage) and Its Limitations|Easy Path to the Rules of the Rites of Hajj|A Call to the Pilgrims of the Scared House of Allah|More|feedback|Important links|Contact us|Privacy policy|Islam Q&A|Learning Arabic Language|About Us|Convert To Islam|Noble Quran encyclopedia|IslamHouse.com Reader|Encyclopedia of Translated Prophetic Hadiths|Our Services|The Quran|Balagh|Center for downloading files|To embrace Islam now...|Follow us through|Or join our mailing list.|© Islamhouse Website|×|×|Choose language|'}} ``` An instance of `sentences` type for language `ak_GH`: ``` {'LASER_similarity': 1.4549942016601562, 'translation': {'ak_GH': 'Salah (nyamefere) ye Mmerebeia', 'en_XX': 'What he dislikes when fasting (10)'}} ``` ### Data Fields For `documents` type: - `Domain`: a `string` feature containing the domain. - `Source_URL`: a `string` feature containing the source URL. - `Target_URL`: a `string` feature containing the target URL. - `translation`: a `dictionary` feature with two keys : - `en_XX`: a `string` feature containing the content in English. - <language_code>: a `string` feature containing the content in the `language_code` specified. 
For `sentences` type: - `LASER_similarity`: a `float32` feature representing the LASER similarity score. - `translation`: a `dictionary` feature with two keys : - `en_XX`: a `string` feature containing the content in English. - <language_code>: a `string` feature containing the content in the `language_code` specified. ### Data Splits Split sizes of some small configurations: | name |train| |----------|----:| |documents-zz_TR|41| |sentences-zz_TR|34| |documents-tz_MA|4| |sentences-tz_MA|33| |documents-ak_GH|249| |sentences-ak_GH|478| ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information Since the corpus is built from crawled web documents, personal and sensitive information may be present. This should be considered before using the data. ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information ``` @inproceedings{elkishky_ccaligned_2020, author = {El-Kishky, Ahmed and Chaudhary, Vishrav and Guzm{\'a}n, Francisco and Koehn, Philipp}, booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)}, month = {November}, title = {{CCAligned}: A Massive Collection of Cross-lingual Web-Document Pairs}, year = {2020}, address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-main.480", doi = "10.18653/v1/2020.emnlp-main.480", pages = "5960--5969" } ``` ### Contributions Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset.
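To make the two configuration types above concrete, a hedged sketch; the `language_code` and `type` keyword arguments and the field names come from this card, while the 1.04 similarity cutoff is purely an illustrative assumption.

```python
# Hedged sketch: load the sentence-level ccaligned configuration for
# Akan (ak_GH), as in the card's example, and keep high-similarity pairs.
from datasets import load_dataset

sentences = load_dataset(
    "ccaligned_multilingual",
    language_code="ak_GH",
    type="sentences",
    split="train",
)

pair = sentences[0]
print(pair["LASER_similarity"])
print(pair["translation"]["en_XX"], "|", pair["translation"]["ak_GH"])

# The 1.04 threshold below is an arbitrary illustration, not a value
# recommended anywhere in the card.
high_quality = sentences.filter(lambda ex: ex["LASER_similarity"] > 1.04)
print(len(high_quality), "pairs above the similarity threshold")
```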
ccaligned_multilingual
[ "task_categories:other", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:translation", "size_categories:n<1K", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "size_categories:10M<n<100M", "source_datasets:original", "language:af", "language:ak", "language:am", "language:ar", "language:as", "language:ay", "language:az", "language:be", "language:bg", "language:bm", "language:bn", "language:br", "language:bs", "language:ca", "language:ceb", "language:ckb", "language:cs", "language:cy", "language:de", "language:dv", "language:el", "language:eo", "language:es", "language:fa", "language:ff", "language:fi", "language:fo", "language:fr", "language:fy", "language:ga", "language:gl", "language:gn", "language:gu", "language:he", "language:hi", "language:hr", "language:hu", "language:id", "language:ig", "language:is", "language:it", "language:iu", "language:ja", "language:ka", "language:kac", "language:kg", "language:kk", "language:km", "language:kn", "language:ko", "language:ku", "language:ky", "language:la", "language:lg", "language:li", "language:ln", "language:lo", "language:lt", "language:lv", "language:mg", "language:mi", "language:mk", "language:ml", "language:mn", "language:mr", "language:ms", "language:mt", "language:my", "language:ne", "language:nl", "language:no", "language:nso", "language:ny", "language:om", "language:or", "language:pa", "language:pl", "language:ps", "language:pt", "language:rm", "language:ro", "language:ru", "language:rw", "language:sc", "language:sd", "language:se", "language:shn", "language:si", "language:sk", "language:sl", "language:sn", "language:so", "language:sq", "language:sr", "language:ss", "language:st", "language:su", "language:sv", "language:sw", "language:syc", "language:szl", "language:ta", "language:te", "language:tg", "language:th", "language:ti", "language:tl", "language:tn", "language:tr", "language:ts", "language:tt", "language:ug", "language:uk", "language:ur", "language:uz", "language:ve", "language:vi", "language:war", "language:wo", "language:xh", "language:yi", "language:yo", "language:zgh", "language:zh", "language:zu", "language:zza", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["af", "ak", "am", "ar", "as", "ay", "az", "be", "bg", "bm", "bn", "br", "bs", "ca", "ceb", "ckb", "cs", "cy", "de", "dv", "el", "eo", "es", "fa", "ff", "fi", "fo", "fr", "fy", "ga", "gl", "gn", "gu", "he", "hi", "hr", "hu", "id", "ig", "is", "it", "iu", "ja", "ka", "kac", "kg", "kk", "km", "kn", "ko", "ku", "ky", "la", "lg", "li", "ln", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "nso", "ny", "om", "or", "pa", "pl", "ps", "pt", "rm", "ro", "ru", "rw", "sc", "sd", "se", "shn", "si", "sk", "sl", "sn", "so", "sq", "sr", "ss", "st", "su", "sv", "sw", "syc", "szl", "ta", "te", "tg", "th", "ti", "tl", "tn", "tr", "ts", "tt", "ug", "uk", "ur", "uz", "ve", "vi", "war", "wo", "xh", "yi", "yo", "zgh", "zh", "zu", "zza"], "license": ["unknown"], "multilinguality": ["translation"], "size_categories": ["n<1K", "1K<n<10K", "10K<n<100K", "100K<n<1M", "1M<n<10M", "10M<n<100M"], "source_datasets": ["original"], "task_categories": ["other"], "paperswithcode_id": "ccaligned", "pretty_name": "CCAligned", "dataset_info": [{"config_name": "documents-zz_TR", "features": [{"name": "Domain", "dtype": "string"}, {"name": "Source_URL", "dtype": "string"}, {"name": "Target_URL", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en_XX", "zz_TR"]}}}], "splits": [{"name": "train", "num_bytes": 641412, "num_examples": 41}], "download_size": 125488, "dataset_size": 641412}, {"config_name": "sentences-zz_TR", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en_XX", "zz_TR"]}}}, {"name": "LASER_similarity", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 4056, "num_examples": 34}], "download_size": 1428, "dataset_size": 4056}, {"config_name": "documents-tz_MA", "features": [{"name": "Domain", "dtype": "string"}, {"name": "Source_URL", "dtype": "string"}, {"name": "Target_URL", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en_XX", "tz_MA"]}}}], "splits": [{"name": "train", "num_bytes": 51782, "num_examples": 4}], "download_size": 11996, "dataset_size": 51782}, {"config_name": "sentences-tz_MA", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en_XX", "tz_MA"]}}}, {"name": "LASER_similarity", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 6256, "num_examples": 33}], "download_size": 2420, "dataset_size": 6256}, {"config_name": "documents-ak_GH", "features": [{"name": "Domain", "dtype": "string"}, {"name": "Source_URL", "dtype": "string"}, {"name": "Target_URL", "dtype": "string"}, {"name": "translation", "dtype": {"translation": {"languages": ["en_XX", "ak_GH"]}}}], "splits": [{"name": "train", "num_bytes": 10738312, "num_examples": 249}], "download_size": 399236, "dataset_size": 10738312}, {"config_name": "sentences-ak_GH", "features": [{"name": "translation", "dtype": {"translation": {"languages": ["en_XX", "ak_GH"]}}}, {"name": "LASER_similarity", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 50110, "num_examples": 478}], "download_size": 17636, "dataset_size": 50110}]}
2024-01-18T11:02:11+00:00
[]
[ "af", "ak", "am", "ar", "as", "ay", "az", "be", "bg", "bm", "bn", "br", "bs", "ca", "ceb", "ckb", "cs", "cy", "de", "dv", "el", "eo", "es", "fa", "ff", "fi", "fo", "fr", "fy", "ga", "gl", "gn", "gu", "he", "hi", "hr", "hu", "id", "ig", "is", "it", "iu", "ja", "ka", "kac", "kg", "kk", "km", "kn", "ko", "ku", "ky", "la", "lg", "li", "ln", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "nso", "ny", "om", "or", "pa", "pl", "ps", "pt", "rm", "ro", "ru", "rw", "sc", "sd", "se", "shn", "si", "sk", "sl", "sn", "so", "sq", "sr", "ss", "st", "su", "sv", "sw", "syc", "szl", "ta", "te", "tg", "th", "ti", "tl", "tn", "tr", "ts", "tt", "ug", "uk", "ur", "uz", "ve", "vi", "war", "wo", "xh", "yi", "yo", "zgh", "zh", "zu", "zza" ]
TAGS #task_categories-other #annotations_creators-no-annotation #language_creators-found #multilinguality-translation #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #size_categories-10M<n<100M #source_datasets-original #language-Afrikaans #language-Akan #language-Amharic #language-Arabic #language-Assamese #language-Aymara #language-Azerbaijani #language-Belarusian #language-Bulgarian #language-Bambara #language-Bengali #language-Breton #language-Bosnian #language-Catalan #language-Cebuano #language-Central Kurdish #language-Czech #language-Welsh #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-Esperanto #language-Spanish #language-Persian #language-Fulah #language-Finnish #language-Faroese #language-French #language-Western Frisian #language-Irish #language-Galician #language-Guarani #language-Gujarati #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Inuktitut #language-Japanese #language-Georgian #language-Kachin #language-Kongo #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Kurdish #language-Kirghiz #language-Latin #language-Ganda #language-Limburgan #language-Lingala #language-Lao #language-Lithuanian #language-Latvian #language-Malagasy #language-Maori #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian #language-Pedi #language-Nyanja #language-Oromo #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Sardinian #language-Sindhi #language-Northern Sami #language-Shan #language-Sinhala #language-Slovak #language-Slovenian #language-Shona #language-Somali #language-Albanian #language-Serbian #language-Swati #language-Southern Sotho #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Classical Syriac #language-Silesian #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Tigrinya #language-Tagalog #language-Tswana #language-Turkish #language-Tsonga #language-Tatar #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Venda #language-Vietnamese #language-Waray (Philippines) #language-Wolof #language-Xhosa #language-Yiddish #language-Yoruba #language-Standard Moroccan Tamazight #language-Chinese #language-Zulu #language-Zaza #license-unknown #region-us
Dataset Card for ccaligned\_multilingual ======================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: URL * Leaderboard: * Point of Contact: ### Dataset Summary CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web-documents, and ensuring corresponding language codes were corresponding in the URLs of web documents. This pattern matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to multiple documents in different target languages, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French). This corpus was created from 68 Commoncrawl Snapshots. To load a language which isn't part of the config, all you need to do is specify the language code. You can find the valid languages in URL E.g. or ### Supported Tasks and Leaderboards ### Languages The text in the dataset is in (137) multiple languages aligned with English. Dataset Structure ----------------- ### Data Instances An instance of 'documents' type for language 'ak\_GH': An instance of 'sentences' type for language 'ak\_GH': ### Data Fields For 'documents' type: * 'Domain': a 'string' feature containing the domain. * 'Source\_URL': a 'string' feature containing the source URL. * 'Target\_URL': a 'string' feature containing the target URL. * 'translation': a 'dictionary' feature with two keys : + 'en\_XX': a 'string' feature containing the content in English. + <language\_code>: a 'string' feature containing the content in the 'language\_code' specified. For 'sentences' type: * 'LASER\_similarity': a 'float32' feature representing the LASER similarity score. * 'translation': a 'dictionary' feature with two keys : + 'en\_XX': a 'string' feature containing the content in English. + <language\_code>: a 'string' feature containing the content in the 'language\_code' specified. ### Data Splits Split sizes of some small configurations: Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @gchhablani for adding this dataset.
[ "### Dataset Summary\n\n\nCCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web-documents, and ensuring corresponding language codes were corresponding in the URLs of web documents. This pattern matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to mulitple documents in different target language, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French). This corpus was created from 68 Commoncrawl Snapshots.\n\n\nTo load a language which isn't part of the config, all you need to do is specify the language code. You can find the valid languages in URL E.g.\n\n\nor", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe text in the dataset is in (137) multiple languages aligned with english.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn instance of 'documents' type for language 'ak\\_GH':\n\n\nAn instance of 'sentences' type for language 'ak\\_GH':", "### Data Fields\n\n\nFor 'documents' type:\n\n\n* 'Domain': a 'string' feature containing the domain.\n* 'Source\\_URL': a 'string' feature containing the source URL.\n* 'Target\\_URL': a 'string' feature containing the target URL.\n* 'translation': a 'dictionary' feature with two keys :\n\t+ 'en\\_XX': a 'string' feature containing the content in English.\n\t+ <language\\_code>: a 'string' feature containing the content in the 'language\\_code' specified.\n\n\nFor 'sentences' type:\n\n\n* 'LASER\\_similarity': a 'float32' feature representing the LASER similarity score.\n* 'translation': a 'dictionary' feature with two keys :\n\t+ 'en\\_XX': a 'string' feature containing the content in English.\n\t+ <language\\_code>: a 'string' feature containing the content in the 'language\\_code' specified.", "### Data Splits\n\n\nSplit sizes of some small configurations:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @gchhablani for adding this dataset." ]
[ "TAGS\n#task_categories-other #annotations_creators-no-annotation #language_creators-found #multilinguality-translation #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #size_categories-10M<n<100M #source_datasets-original #language-Afrikaans #language-Akan #language-Amharic #language-Arabic #language-Assamese #language-Aymara #language-Azerbaijani #language-Belarusian #language-Bulgarian #language-Bambara #language-Bengali #language-Breton #language-Bosnian #language-Catalan #language-Cebuano #language-Central Kurdish #language-Czech #language-Welsh #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-Esperanto #language-Spanish #language-Persian #language-Fulah #language-Finnish #language-Faroese #language-French #language-Western Frisian #language-Irish #language-Galician #language-Guarani #language-Gujarati #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Inuktitut #language-Japanese #language-Georgian #language-Kachin #language-Kongo #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Kurdish #language-Kirghiz #language-Latin #language-Ganda #language-Limburgan #language-Lingala #language-Lao #language-Lithuanian #language-Latvian #language-Malagasy #language-Maori #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian #language-Pedi #language-Nyanja #language-Oromo #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Sardinian #language-Sindhi #language-Northern Sami #language-Shan #language-Sinhala #language-Slovak #language-Slovenian #language-Shona #language-Somali #language-Albanian #language-Serbian #language-Swati #language-Southern Sotho #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Classical Syriac #language-Silesian #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Tigrinya #language-Tagalog #language-Tswana #language-Turkish #language-Tsonga #language-Tatar #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Venda #language-Vietnamese #language-Waray (Philippines) #language-Wolof #language-Xhosa #language-Yiddish #language-Yoruba #language-Standard Moroccan Tamazight #language-Chinese #language-Zulu #language-Zaza #license-unknown #region-us \n", "### Dataset Summary\n\n\nCCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web-documents, and ensuring corresponding language codes were corresponding in the URLs of web documents. This pattern matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to mulitple documents in different target language, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French). This corpus was created from 68 Commoncrawl Snapshots.\n\n\nTo load a language which isn't part of the config, all you need to do is specify the language code. 
You can find the valid languages in URL E.g.\n\n\nor", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe text in the dataset is in (137) multiple languages aligned with english.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn instance of 'documents' type for language 'ak\\_GH':\n\n\nAn instance of 'sentences' type for language 'ak\\_GH':", "### Data Fields\n\n\nFor 'documents' type:\n\n\n* 'Domain': a 'string' feature containing the domain.\n* 'Source\\_URL': a 'string' feature containing the source URL.\n* 'Target\\_URL': a 'string' feature containing the target URL.\n* 'translation': a 'dictionary' feature with two keys :\n\t+ 'en\\_XX': a 'string' feature containing the content in English.\n\t+ <language\\_code>: a 'string' feature containing the content in the 'language\\_code' specified.\n\n\nFor 'sentences' type:\n\n\n* 'LASER\\_similarity': a 'float32' feature representing the LASER similarity score.\n* 'translation': a 'dictionary' feature with two keys :\n\t+ 'en\\_XX': a 'string' feature containing the content in English.\n\t+ <language\\_code>: a 'string' feature containing the content in the 'language\\_code' specified.", "### Data Splits\n\n\nSplit sizes of some small configurations:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @gchhablani for adding this dataset." ]
[ 863, 197, 10, 29, 40, 233, 20, 7, 4, 10, 10, 5, 5, 9, 50, 7, 8, 14, 6, 6, 18 ]
[ "passage: ", "passage: TAGS\n#task_categories-other #annotations_creators-no-annotation #language_creators-found #multilinguality-translation #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #size_categories-10M<n<100M #source_datasets-original #language-Afrikaans #language-Akan #language-Amharic #language-Arabic #language-Assamese #language-Aymara #language-Azerbaijani #language-Belarusian #language-Bulgarian #language-Bambara #language-Bengali #language-Breton #language-Bosnian #language-Catalan #language-Cebuano #language-Central Kurdish #language-Czech #language-Welsh #language-German #language-Dhivehi #language-Modern Greek (1453-) #language-Esperanto #language-Spanish #language-Persian #language-Fulah #language-Finnish #language-Faroese #language-French #language-Western Frisian #language-Irish #language-Galician #language-Guarani #language-Gujarati #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Inuktitut #language-Japanese #language-Georgian #language-Kachin #language-Kongo #language-Kazakh #language-Khmer #language-Kannada #language-Korean #language-Kurdish #language-Kirghiz #language-Latin #language-Ganda #language-Limburgan #language-Lingala #language-Lao #language-Lithuanian #language-Latvian #language-Malagasy #language-Maori #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Malay (macrolanguage) #language-Maltese #language-Burmese #language-Nepali (macrolanguage) #language-Dutch #language-Norwegian #language-Pedi #language-Nyanja #language-Oromo #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Sardinian #language-Sindhi #language-Northern Sami #language-Shan #language-Sinhala #language-Slovak #language-Slovenian #language-Shona #language-Somali #language-Albanian #language-Serbian #language-Swati #language-Southern Sotho #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Classical Syriac #language-Silesian #language-Tamil #language-Telugu #language-Tajik #language-Thai #language-Tigrinya #language-Tagalog #language-Tswana #language-Turkish #language-Tsonga #language-Tatar #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Venda #language-Vietnamese #language-Waray (Philippines) #language-Wolof #language-Xhosa #language-Yiddish #language-Yoruba #language-Standard Moroccan Tamazight #language-Chinese #language-Zulu #language-Zaza #license-unknown #region-us \n### Dataset Summary\n\n\nCCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language identification on raw web-documents, and ensuring corresponding language codes were corresponding in the URLs of web documents. This pattern matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to mulitple documents in different target language, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French). 
This corpus was created from 68 Commoncrawl Snapshots.\n\n\nTo load a language which isn't part of the config, all you need to do is specify the language code. You can find the valid languages in URL E.g.\n\n\nor### Supported Tasks and Leaderboards### Languages\n\n\nThe text in the dataset is in (137) multiple languages aligned with english.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn instance of 'documents' type for language 'ak\\_GH':\n\n\nAn instance of 'sentences' type for language 'ak\\_GH':" ]
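The CCAligned card above says a per-language config is loaded by specifying its language code, but the inline examples were lost in extraction. As a minimal, hedged sketch only: it assumes the dataset is published on the Hugging Face Hub under the id `ccaligned_multilingual` and that config names follow the `<type>-<language_code>` pattern visible in this record's metadata (e.g. `sentences-ak_GH`); check the live card for the exact id and argument form.

```python
from datasets import load_dataset

# Config names pair a record type with a language code, as in the
# dataset_info metadata above: "documents-ak_GH" or "sentences-ak_GH".
sentences = load_dataset("ccaligned_multilingual", "sentences-ak_GH", split="train")

example = sentences[0]
# Each sentence-level example holds an en_XX/ak_GH translation pair
# plus the LASER similarity score used to mine the alignment.
print(example["translation"]["en_XX"])
print(example["translation"]["ak_GH"])
print(example["LASER_similarity"])
```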
b54010592d87b35ea7e007a1de9e6a3ed7d35f8b
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://zil.ipipan.waw.pl/Scwad/CDSCorpus - **Repository:** - **Paper:** https://aclanthology.org/P17-1073/ - **Leaderboard:** https://klejbenchmark.com/leaderboard/ - **Point of Contact:** [Alina Wróblewska](mailto:alina@ipipan.waw.pl) ### Dataset Summary Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Polish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - pair_ID: id of sentence pairs - sentence_A: first sentence - sentence_B: second sentence for cdsc-e domain: - entailment_judgment: either 'NEUTRAL', 'CONTRADICTION' or 'ENTAILMENT' for cdsc-r domain: - relatedness_score: float representing the relatedness score ### Data Splits Data is split into train/dev/test sets. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information CC BY-NC-SA 4.0 ### Citation Information ``` @inproceedings{wroblewska-krasnowska-kieras-2017-polish, title = "{P}olish evaluation dataset for compositional distributional semantics models", author = "Wr{\'o}blewska, Alina and Krasnowska-Kiera{\'s}, Katarzyna", editor = "Barzilay, Regina and Kan, Min-Yen", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P17-1073", doi = "10.18653/v1/P17-1073", pages = "784--792", abstract = "The paper presents a procedure of building an evaluation dataset for the validation of compositional distributional semantics models estimated for languages other than English. The procedure generally builds on steps designed to assemble the SICK corpus, which contains pairs of English sentences annotated for semantic relatedness and entailment, because we aim at building a comparable dataset. However, the implementation of particular building steps significantly differs from the original SICK design assumptions, which is caused by both lack of necessary extraneous resources for an investigated language and the need for language-specific transformation rules. The designed procedure is verified on Polish, a fusional language with a relatively free word order, and contributes to building a Polish evaluation dataset. The resource consists of 10K sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish.", } ``` ### Contributions Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
cdsc
[ "task_categories:other", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:cc-by-nc-sa-4.0", "sentences entailment and relatedness", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["pl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "paperswithcode_id": "polish-cdscorpus", "pretty_name": "Polish CDSCorpus", "tags": ["sentences entailment and relatedness"], "dataset_info": [{"config_name": "cdsc-e", "features": [{"name": "pair_ID", "dtype": "int32"}, {"name": "sentence_A", "dtype": "string"}, {"name": "sentence_B", "dtype": "string"}, {"name": "entailment_judgment", "dtype": {"class_label": {"names": {"0": "NEUTRAL", "1": "CONTRADICTION", "2": "ENTAILMENT"}}}}], "splits": [{"name": "train", "num_bytes": 1381894, "num_examples": 8000}, {"name": "test", "num_bytes": 179392, "num_examples": 1000}, {"name": "validation", "num_bytes": 174654, "num_examples": 1000}], "download_size": 744169, "dataset_size": 1735940}, {"config_name": "cdsc-r", "features": [{"name": "pair_ID", "dtype": "int32"}, {"name": "sentence_A", "dtype": "string"}, {"name": "sentence_B", "dtype": "string"}, {"name": "relatedness_score", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 1349894, "num_examples": 8000}, {"name": "test", "num_bytes": 175392, "num_examples": 1000}, {"name": "validation", "num_bytes": 170654, "num_examples": 1000}], "download_size": 747648, "dataset_size": 1695940}], "configs": [{"config_name": "cdsc-e", "data_files": [{"split": "train", "path": "cdsc-e/train-*"}, {"split": "test", "path": "cdsc-e/test-*"}, {"split": "validation", "path": "cdsc-e/validation-*"}]}, {"config_name": "cdsc-r", "data_files": [{"split": "train", "path": "cdsc-r/train-*"}, {"split": "test", "path": "cdsc-r/test-*"}, {"split": "validation", "path": "cdsc-r/validation-*"}]}]}
2024-01-18T08:46:51+00:00
[]
[ "pl" ]
TAGS #task_categories-other #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Polish #license-cc-by-nc-sa-4.0 #sentences entailment and relatedness #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: URL - Leaderboard: URL - Point of Contact: Alina Wróblewska ### Dataset Summary Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource. ### Supported Tasks and Leaderboards ### Languages Polish ## Dataset Structure ### Data Instances ### Data Fields - pair_ID: id of sentence pairs - sentence_A: first sentence - sentence_B: second sentence for cdsc-e domain: - entailment_judgment: either 'NEUTRAL', 'CONTRADICTION' or 'ENTAILMENT' for cdsc-r domain: - relatedness_score: float representing the relatedness score ### Data Splits Data is split into train/dev/test sets. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Dataset provided for research purposes only. Please check dataset license for additional information. ## Additional Information ### Dataset Curators ### Licensing Information CC BY-NC-SA 4.0 ### Contributions Thanks to @abecadel for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Alina Wróblewska", "### Dataset Summary\n\nPolish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to the Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource.", "### Supported Tasks and Leaderboards", "### Languages\n\nPolish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- pair_ID: id of sentences pairs\n- sentence_A: first sentence\n- sentence_B: second sentence\n\nfor cdsc-e domain:\n- entailment_judgment: either 'NEUTRAL', 'CONTRADICTION' or 'ENTAILMENT'\n\nfor cdsc-r domain:\n- relatedness_score: float representing a reletedness", "### Data Splits\n\nData is splitted in train/dev/test split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nDataset provided for research purposes only. Please check dataset license for additional information.", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCC BY-NC-SA 4.0", "### Contributions\n\nThanks to @abecadel for adding this dataset." ]
[ "TAGS\n#task_categories-other #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Polish #license-cc-by-nc-sa-4.0 #sentences entailment and relatedness #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Alina Wróblewska", "### Dataset Summary\n\nPolish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to the Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource.", "### Supported Tasks and Leaderboards", "### Languages\n\nPolish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- pair_ID: id of sentences pairs\n- sentence_A: first sentence\n- sentence_B: second sentence\n\nfor cdsc-e domain:\n- entailment_judgment: either 'NEUTRAL', 'CONTRADICTION' or 'ENTAILMENT'\n\nfor cdsc-r domain:\n- relatedness_score: float representing a reletedness", "### Data Splits\n\nData is splitted in train/dev/test split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nDataset provided for research purposes only. Please check dataset license for additional information.", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCC BY-NC-SA 4.0", "### Contributions\n\nThanks to @abecadel for adding this dataset." ]
[ 90, 10, 120, 33, 96, 10, 6, 6, 6, 89, 17, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 25, 5, 6, 13, 17 ]
[ "passage: TAGS\n#task_categories-other #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Polish #license-cc-by-nc-sa-4.0 #sentences entailment and relatedness #region-us \n# Dataset Card for [Dataset Name]## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Alina Wróblewska### Dataset Summary\n\nPolish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to the Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource.### Supported Tasks and Leaderboards### Languages\n\nPolish## Dataset Structure### Data Instances### Data Fields\n\n- pair_ID: id of sentences pairs\n- sentence_A: first sentence\n- sentence_B: second sentence\n\nfor cdsc-e domain:\n- entailment_judgment: either 'NEUTRAL', 'CONTRADICTION' or 'ENTAILMENT'\n\nfor cdsc-r domain:\n- relatedness_score: float representing a reletedness### Data Splits\n\nData is splitted in train/dev/test split.## Dataset Creation### Curation Rationale### Source Data" ]
6c872f54a00a2bd65b1e502b5221dd1161d30789
# Dataset Card for [Dataset Name] ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** http://2019.poleval.pl/index.php/tasks/ - **Repository:** https://github.com/ptaszynski/cyberbullying-Polish - **Paper:** - **Leaderboard:** https://klejbenchmark.com/leaderboard/ - **Point of Contact:** ### Dataset Summary The Cyberbullying Detection task was part of the 2019 edition of the PolEval competition. The goal is to predict if a given Twitter message contains cyberbullying (harmful) content. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Polish ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields - sentence: an anonymized tweet in Polish - target: 1 if the tweet is described as bullying, 0 otherwise. The test set doesn't have labels so -1 is used instead. ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information BSD 3-Clause ### Citation Information [More Information Needed] ### Contributions Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
cdt
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pl", "license:bsd-3-clause", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["pl"], "license": ["bsd-3-clause"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "cdt", "dataset_info": {"features": [{"name": "sentence", "dtype": "string"}, {"name": "target", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 1104314, "num_examples": 10041}, {"name": "test", "num_bytes": 109677, "num_examples": 1000}], "download_size": 649329, "dataset_size": 1213991}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-18T14:08:18+00:00
[]
[ "pl" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Polish #license-bsd-3-clause #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Leaderboard: URL - Point of Contact: ### Dataset Summary The Cyberbullying Detection task was part of the 2019 edition of the PolEval competition. The goal is to predict if a given Twitter message contains cyberbullying (harmful) content. ### Supported Tasks and Leaderboards ### Languages Polish ## Dataset Structure ### Data Instances ### Data Fields - sentence: an anonymized tweet in Polish - target: 1 if the tweet is described as bullying, 0 otherwise. The test set doesn't have labels so -1 is used instead. ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information BSD 3-Clause ### Contributions Thanks to @abecadel for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n URL\n- Repository:\n URL\n- Paper:\n- Leaderboard:\n URL\n- Point of Contact:", "### Dataset Summary\n\nThe Cyberbullying Detection task was part of 2019 edition of PolEval competition. The goal is to predict if a given Twitter message contains a cyberbullying (harmful) content.", "### Supported Tasks and Leaderboards", "### Languages\n\nPolish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- sentence: an anonymized tweet in polish\n- target: 1 if tweet is described as bullying, 0 otherwise. The test set doesn't have labels so -1 is used instead.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nBSD 3-Clause", "### Contributions\n\nThanks to @abecadel for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Polish #license-bsd-3-clause #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n URL\n- Repository:\n URL\n- Paper:\n- Leaderboard:\n URL\n- Point of Contact:", "### Dataset Summary\n\nThe Cyberbullying Detection task was part of 2019 edition of PolEval competition. The goal is to predict if a given Twitter message contains a cyberbullying (harmful) content.", "### Supported Tasks and Leaderboards", "### Languages\n\nPolish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- sentence: an anonymized tweet in polish\n- target: 1 if tweet is described as bullying, 0 otherwise. The test set doesn't have labels so -1 is used instead.", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nBSD 3-Clause", "### Contributions\n\nThanks to @abecadel for adding this dataset." ]
[ 92, 10, 120, 27, 47, 10, 6, 6, 6, 45, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 12, 17 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Polish #license-bsd-3-clause #region-us \n# Dataset Card for [Dataset Name]## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n URL\n- Repository:\n URL\n- Paper:\n- Leaderboard:\n URL\n- Point of Contact:### Dataset Summary\n\nThe Cyberbullying Detection task was part of 2019 edition of PolEval competition. The goal is to predict if a given Twitter message contains a cyberbullying (harmful) content.### Supported Tasks and Leaderboards### Languages\n\nPolish## Dataset Structure### Data Instances### Data Fields\n\n- sentence: an anonymized tweet in polish\n- target: 1 if tweet is described as bullying, 0 otherwise. The test set doesn't have labels so -1 is used instead.### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\nBSD 3-Clause### Contributions\n\nThanks to @abecadel for adding this dataset." ]
abafbe63cf92c33791b217e8f4f3460f816f1d96
# Dataset Card for [cedr] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [GitHub](https://github.com/sag111/CEDR) - **Repository:** [GitHub](https://github.com/sag111/CEDR) - **Paper:** [ScienceDirect](https://www.sciencedirect.com/science/article/pii/S1877050921013247) - **Leaderboard:** - **Point of Contact:** [@sag111](mailto:sag111@mail.ru) ### Dataset Summary The Corpus for Emotions Detecting in Russian-language text sentences of different social sources (CEDR) contains 9410 comments labeled for 5 emotion categories (joy, sadness, surprise, fear, and anger). Here are 2 dataset configurations: - "main" - contains "text", "labels", and "source" features; - "enriched" - includes all "main" features and "sentences". Dataset with predefined train/test splits. ### Supported Tasks and Leaderboards This dataset is intended for multi-label emotion classification. ### Languages The data is in Russian. ## Dataset Structure ### Data Instances Each instance is a text sentence in Russian from several sources with one or more emotion annotations (or no emotion at all). An example of an instance from the dataset is shown below: ``` { 'text': 'Забавно как люди в возрасте удивляются входящим звонкам на мобильник)', 'labels': [0], 'source': 'twitter', 'sentences': [ [ {'forma': 'Забавно', 'lemma': 'Забавно'}, {'forma': 'как', 'lemma': 'как'}, {'forma': 'люди', 'lemma': 'человек'}, {'forma': 'в', 'lemma': 'в'}, {'forma': 'возрасте', 'lemma': 'возраст'}, {'forma': 'удивляются', 'lemma': 'удивляться'}, {'forma': 'входящим', 'lemma': 'входить'}, {'forma': 'звонкам', 'lemma': 'звонок'}, {'forma': 'на', 'lemma': 'на'}, {'forma': 'мобильник', 'lemma': 'мобильник'}, {'forma': ')', 'lemma': ')'} ] ] } ``` Emotion label codes: {0: "joy", 1: "sadness", 2: "surprise", 3: "fear", 4: "anger"} ### Data Fields The main configuration includes: - text: the text of the sentence; - labels: the emotion annotations; - source: the tag name of the corresponding source In addition to the above, the raw data includes: - sentences: text tokenized and lemmatized with [udpipe](https://ufal.mff.cuni.cz/udpipe) - 'forma': the original word form; - 'lemma': the lemma of this word ### Data Splits The dataset includes a set of train/test splits with 7528 and 1882 examples, respectively. ## Dataset Creation ### Curation Rationale The formed dataset of examples consists of sentences in Russian from several sources (blogs, microblogs, news), which allows creating methods to analyse various types of texts. The created methodology for building the dataset based on applying a crowdsourcing service can be used to expand the number of examples to improve the accuracy of supervised classifiers. ### Source Data #### Initial Data Collection and Normalization Data was collected from several sources: posts of the Live Journal social network, texts of the online news agency Lenta.ru, and Twitter microblog posts. Only those sentences were selected that contained marker words from the dictionary of [the emotive vocabulary of the Russian language](http://lexrus.ru/default.aspx?p=2876). The authors manually formed a list of marker words for each emotion by choosing words from different categories of the dictionary. In total, 3069 sentences were selected from LiveJournal posts, 2851 sentences from Lenta.Ru, and 3490 sentences from Twitter. After selection, sentences were offered to annotators for labeling. #### Who are the source language producers? Russian-speaking LiveJournal and Twitter users, and authors of news articles on the site lenta.ru. ### Annotations #### Annotation process Annotating sentences with labels of their emotions was performed with the help of [a crowdsourcing platform](https://yandex.ru/support/toloka/index.html?lang=en). The annotators’ task was: “What emotions did the author express in the sentence?”. The annotators were allowed to put an arbitrary number of the following emotion labels: "joy", "sadness", "anger", "fear", and "surprise". If the accuracy of an annotator on the control sentences (including the trial run) became less than 70%, or if the accuracy was less than 66% over the last six control samples, the annotator was dismissed. Sentences were split into tasks and assigned to annotators so that each sentence was annotated at least three times. A label of a specific emotion was assigned to a sentence if put by more than half of the annotators. #### Who are the annotators? Only those of the 30% of the best-performing active users (by the platform’s internal rating) who spoke Russian and were over 18 years old were allowed into the annotation process. Moreover, before a platform user could be employed as an annotator, they underwent a training task, after which they were to mark 25 trial samples with more than 80% agreement compared to the annotation that the authors had performed themselves. ### Personal and Sensitive Information The text of the sentences may contain profanity. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators Researchers at AI technology lab at NRC "Kurchatov Institute". See the author [list](https://www.sciencedirect.com/science/article/pii/S1877050921013247). ### Licensing Information The GitHub repository which houses this dataset has an Apache License 2.0. ### Citation Information If you have found our results helpful in your work, feel free to cite our publication. This is an updated version of the dataset, the collection and preparation of which is described here: ``` @article{sboev2021data, title={Data-Driven Model for Emotion Detection in Russian Texts}, author={Sboev, Alexander and Naumov, Aleksandr and Rybka, Roman}, journal={Procedia Computer Science}, volume={190}, pages={637--642}, year={2021}, publisher={Elsevier} } ``` ### Contributions Thanks to [@naumov-al](https://github.com/naumov-al) for adding this dataset.
cedr
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:multi-label-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:ru", "license:apache-2.0", "emotion-classification", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["ru"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification", "multi-label-classification"], "pretty_name": "The Corpus for Emotions Detecting in Russian-language text sentences (CEDR)", "tags": ["emotion-classification"], "dataset_info": [{"config_name": "enriched", "features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "joy", "1": "sadness", "2": "surprise", "3": "fear", "4": "anger"}}}}, {"name": "source", "dtype": "string"}, {"name": "sentences", "list": {"list": [{"name": "forma", "dtype": "string"}, {"name": "lemma", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 4792338, "num_examples": 7528}, {"name": "test", "num_bytes": 1182315, "num_examples": 1882}], "download_size": 2571516, "dataset_size": 5974653}, {"config_name": "main", "features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": {"class_label": {"names": {"0": "joy", "1": "sadness", "2": "surprise", "3": "fear", "4": "anger"}}}}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1418343, "num_examples": 7528}, {"name": "test", "num_bytes": 350263, "num_examples": 1882}], "download_size": 945328, "dataset_size": 1768606}], "configs": [{"config_name": "enriched", "data_files": [{"split": "train", "path": "enriched/train-*"}, {"split": "test", "path": "enriched/test-*"}]}, {"config_name": "main", "data_files": [{"split": "train", "path": "main/train-*"}, {"split": "test", "path": "main/test-*"}], "default": true}]}
2024-01-18T14:11:21+00:00
[]
[ "ru" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #task_ids-multi-label-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Russian #license-apache-2.0 #emotion-classification #region-us
# Dataset Card for [cedr] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: GitHub - Repository: GitHub - Paper: ScienceDirect - Leaderboard: - Point of Contact: @sag111 ### Dataset Summary The Corpus for Emotions Detecting in Russian-language text sentences of different social sources (CEDR) contains 9410 comments labeled for 5 emotion categories (joy, sadness, surprise, fear, and anger). Here are 2 dataset configurations: - "main" - contains "text", "labels", and "source" features; - "enriched" - includes all "main" features and "sentences". Dataset with predefined train/test splits. ### Supported Tasks and Leaderboards This dataset is intended for multi-label emotion classification. ### Languages The data is in Russian. ## Dataset Structure ### Data Instances Each instance is a text sentence in Russian from several sources with one or more emotion annotations (or no emotion at all). An example for an instance from the dataset is shown below: Emotion label codes: {0: "joy", 1: "sadness", 2: "surprise", 3: "fear", 4: "anger"} ### Data Fields The main configuration includes: - text: the text of the sentence; - labels: the emotion annotations; - source: the tag name of the corresponding source In addition to the above, the raw data includes: - sentences: text tokenized and lemmatized with udpipe - 'forma': the original word form; - 'lemma': the lemma of this word ### Data Splits The dataset includes a set of train/test splits. with 7528, and 1882 examples respectively. ## Dataset Creation ### Curation Rationale The formed dataset of examples consists of sentences in Russian from several sources (blogs, microblogs, news), which allows creating methods to analyse various types of texts. The created methodology for building the dataset based on applying a crowdsourcing service can be used to expand the number of examples to improve the accuracy of supervised classifiers. ### Source Data #### Initial Data Collection and Normalization Data was collected from several sources: posts of the Live Journal social network, texts of the online news agency URL, and Twitter microblog posts. Only those sentences were selected that contained marker words from the dictionary of the emotive vocabulary of the Russian language. The authors manually formed a list of marker words for each emotion by choosing words from different categories of the dictionary. In total, 3069 sentences were selected from LiveJournal posts, 2851 sentences from Lenta.Ru, and 3490 sentencesfrom Twitter. After selection, sentences were offered to annotators for labeling. #### Who are the source language producers? Russian-speaking LiveJournal and Tweeter users, and authors of news articles on the site URL. ### Annotations #### Annotation process Annotating sentences with labels of their emotions was performed with the help of a crowdsourcing platform. The annotators’ task was: “What emotions did the author express in the sentence?”. 
The annotators were allowed to assign any number of the following emotion labels: "joy", "sadness", "anger", "fear", and "surprise". If the accuracy of an annotator on the control sentences (including the trial run) fell below 70%, or if the accuracy was below 66% over the last six control samples, the annotator was dismissed. Sentences were split into tasks and assigned to annotators so that each sentence was annotated at least three times. A specific emotion label was assigned to a sentence if it was chosen by more than half of the annotators. #### Who are the annotators? Only the top 30% of best-performing active users (by the platform’s internal rating) who spoke Russian and were over 18 years old were allowed into the annotation process. Moreover, before a platform user could be employed as an annotator, they underwent a training task, after which they had to mark 25 trial samples with more than 80% agreement compared to the annotation that the authors had performed themselves. ### Personal and Sensitive Information The text of the sentences may contain profanity. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators Researchers at the AI technology lab at NRC "Kurchatov Institute". See the author list. ### Licensing Information The GitHub repository that houses this dataset carries an Apache License 2.0. If you have found our results helpful in your work, feel free to cite our publication. This is an updated version of the dataset; the collection and preparation of the original are described here: ### Contributions Thanks to @naumov-al for adding this dataset.
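A minimal loading sketch in Python (assuming the dataset is published on the Hugging Face Hub under the id `cedr`, and that the `labels` field decodes to a list of emotion ids following the label codes listed above):

```python
from datasets import load_dataset

# Load the "main" configuration with its predefined train/test splits
# (the Hub dataset id "cedr" is an assumption here).
dataset = load_dataset("cedr", "main")

# Label-id mapping as documented in the card above.
id2label = {0: "joy", 1: "sadness", 2: "surprise", 3: "fear", 4: "anger"}

sample = dataset["train"][0]
# "labels" is assumed to hold zero or more emotion ids;
# an empty list means the sentence expresses no emotion.
emotions = [id2label[i] for i in sample["labels"]]
print(sample["text"], sample["source"], emotions)
```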
[ "# Dataset Card for [cedr]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: GitHub\n- Repository: GitHub\n- Paper: ScienceDirect\n- Leaderboard:\n- Point of Contact: @sag111", "### Dataset Summary\n\nThe Corpus for Emotions Detecting in Russian-language text sentences of different social sources (CEDR) contains 9410 comments labeled for 5 emotion categories (joy, sadness, surprise, fear, and anger). \n\nHere are 2 dataset configurations:\n- \"main\" - contains \"text\", \"labels\", and \"source\" features;\n- \"enriched\" - includes all \"main\" features and \"sentences\".\n\nDataset with predefined train/test splits.", "### Supported Tasks and Leaderboards\n\nThis dataset is intended for multi-label emotion classification.", "### Languages\n\nThe data is in Russian.", "## Dataset Structure", "### Data Instances\n\nEach instance is a text sentence in Russian from several sources with one or more emotion annotations (or no emotion at all).\n\nAn example for an instance from the dataset is shown below:\n\n\nEmotion label codes: {0: \"joy\", 1: \"sadness\", 2: \"surprise\", 3: \"fear\", 4: \"anger\"}", "### Data Fields\n\nThe main configuration includes:\n- text: the text of the sentence;\n- labels: the emotion annotations;\n- source: the tag name of the corresponding source\n\nIn addition to the above, the raw data includes:\n- sentences: text tokenized and lemmatized with udpipe\n - 'forma': the original word form;\n - 'lemma': the lemma of this word", "### Data Splits\n\nThe dataset includes a set of train/test splits. \nwith 7528, and 1882 examples respectively.", "## Dataset Creation", "### Curation Rationale\n\nThe formed dataset of examples consists of sentences in Russian from several sources (blogs, microblogs, news), which allows creating methods to analyse various types of texts. The created methodology for building the dataset based on applying a crowdsourcing service can be used to expand the number of examples to improve the accuracy of supervised classifiers.", "### Source Data", "#### Initial Data Collection and Normalization\n\nData was collected from several sources: posts of the Live Journal social network, texts of the online news agency URL, and Twitter microblog posts.\n\nOnly those sentences were selected that contained marker words from the dictionary of the emotive vocabulary of the Russian language. The authors manually formed a list of marker words for each emotion by choosing words from different categories of the dictionary.\n\nIn total, 3069 sentences were selected from LiveJournal posts, 2851 sentences from Lenta.Ru, and 3490 sentencesfrom Twitter. 
After selection, sentences were offered to annotators for labeling.", "#### Who are the source language producers?\n\nRussian-speaking LiveJournal and Tweeter users, and authors of news articles on the site URL.", "### Annotations", "#### Annotation process\n\nAnnotating sentences with labels of their emotions was performed with the help of a crowdsourcing platform.\n\nThe annotators’ task was: “What emotions did the author express in the sentence?”. The annotators were allowed to put an arbitrary number of the following emotion labels: \"joy\", \"sadness\", \"anger\", \"fear\", and \"surprise\".\n\nIf the accuracy of an annotator on the control sentences (including the trial run) became less than 70%, or if the accuracy was less than 66% over the last six control samples, the annotator was dismissed. \n\nSentences were split into tasks and assigned to annotators so that each sentence was annotated at least three times. A label of a specific emotion was assigned to a sentence if put by more than half of the annotators.", "#### Who are the annotators?\n\nOnly those of the 30% of the best-performing active users (by the platform’s internal rating) who spoke Russian and were over 18 years old were allowed into the annotation process. Moreover, before a platform user could be employed as an annotator, they underwent a training task, after which they were to mark 25 trial samples with more than 80% agreement compared to the annotation that the authors had performed themselves.", "### Personal and Sensitive Information\n\nThe text of the sentences may contain profanity.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nResearchers at AI technology lab at NRC \"Kurchatov Institute\". See the author list.", "### Licensing Information\n\nThe GitHub repository which houses this dataset has an Apache License 2.0.\n\n\nIf you have found our results helpful in your work, feel free to cite our publication. This is an updated version of the dataset, the collection and preparation of which is described here:", "### Contributions\n\nThanks to @naumov-al for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #task_ids-multi-label-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Russian #license-apache-2.0 #emotion-classification #region-us \n", "# Dataset Card for [cedr]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: GitHub\n- Repository: GitHub\n- Paper: ScienceDirect\n- Leaderboard:\n- Point of Contact: @sag111", "### Dataset Summary\n\nThe Corpus for Emotions Detecting in Russian-language text sentences of different social sources (CEDR) contains 9410 comments labeled for 5 emotion categories (joy, sadness, surprise, fear, and anger). \n\nHere are 2 dataset configurations:\n- \"main\" - contains \"text\", \"labels\", and \"source\" features;\n- \"enriched\" - includes all \"main\" features and \"sentences\".\n\nDataset with predefined train/test splits.", "### Supported Tasks and Leaderboards\n\nThis dataset is intended for multi-label emotion classification.", "### Languages\n\nThe data is in Russian.", "## Dataset Structure", "### Data Instances\n\nEach instance is a text sentence in Russian from several sources with one or more emotion annotations (or no emotion at all).\n\nAn example for an instance from the dataset is shown below:\n\n\nEmotion label codes: {0: \"joy\", 1: \"sadness\", 2: \"surprise\", 3: \"fear\", 4: \"anger\"}", "### Data Fields\n\nThe main configuration includes:\n- text: the text of the sentence;\n- labels: the emotion annotations;\n- source: the tag name of the corresponding source\n\nIn addition to the above, the raw data includes:\n- sentences: text tokenized and lemmatized with udpipe\n - 'forma': the original word form;\n - 'lemma': the lemma of this word", "### Data Splits\n\nThe dataset includes a set of train/test splits. \nwith 7528, and 1882 examples respectively.", "## Dataset Creation", "### Curation Rationale\n\nThe formed dataset of examples consists of sentences in Russian from several sources (blogs, microblogs, news), which allows creating methods to analyse various types of texts. The created methodology for building the dataset based on applying a crowdsourcing service can be used to expand the number of examples to improve the accuracy of supervised classifiers.", "### Source Data", "#### Initial Data Collection and Normalization\n\nData was collected from several sources: posts of the Live Journal social network, texts of the online news agency URL, and Twitter microblog posts.\n\nOnly those sentences were selected that contained marker words from the dictionary of the emotive vocabulary of the Russian language. The authors manually formed a list of marker words for each emotion by choosing words from different categories of the dictionary.\n\nIn total, 3069 sentences were selected from LiveJournal posts, 2851 sentences from Lenta.Ru, and 3490 sentencesfrom Twitter. 
After selection, sentences were offered to annotators for labeling.", "#### Who are the source language producers?\n\nRussian-speaking LiveJournal and Tweeter users, and authors of news articles on the site URL.", "### Annotations", "#### Annotation process\n\nAnnotating sentences with labels of their emotions was performed with the help of a crowdsourcing platform.\n\nThe annotators’ task was: “What emotions did the author express in the sentence?”. The annotators were allowed to put an arbitrary number of the following emotion labels: \"joy\", \"sadness\", \"anger\", \"fear\", and \"surprise\".\n\nIf the accuracy of an annotator on the control sentences (including the trial run) became less than 70%, or if the accuracy was less than 66% over the last six control samples, the annotator was dismissed. \n\nSentences were split into tasks and assigned to annotators so that each sentence was annotated at least three times. A label of a specific emotion was assigned to a sentence if put by more than half of the annotators.", "#### Who are the annotators?\n\nOnly those of the 30% of the best-performing active users (by the platform’s internal rating) who spoke Russian and were over 18 years old were allowed into the annotation process. Moreover, before a platform user could be employed as an annotator, they underwent a training task, after which they were to mark 25 trial samples with more than 80% agreement compared to the annotation that the authors had performed themselves.", "### Personal and Sensitive Information\n\nThe text of the sentences may contain profanity.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nResearchers at AI technology lab at NRC \"Kurchatov Institute\". See the author list.", "### Licensing Information\n\nThe GitHub repository which houses this dataset has an Apache License 2.0.\n\n\nIf you have found our results helpful in your work, feel free to cite our publication. This is an updated version of the dataset, the collection and preparation of which is described here:", "### Contributions\n\nThanks to @naumov-al for adding this dataset." ]
[ 108, 9, 125, 35, 114, 23, 10, 6, 76, 89, 29, 5, 88, 4, 143, 33, 5, 192, 102, 20, 8, 7, 8, 7, 5, 26, 64, 18 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #task_ids-multi-label-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Russian #license-apache-2.0 #emotion-classification #region-us \n# Dataset Card for [cedr]## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: GitHub\n- Repository: GitHub\n- Paper: ScienceDirect\n- Leaderboard:\n- Point of Contact: @sag111### Dataset Summary\n\nThe Corpus for Emotions Detecting in Russian-language text sentences of different social sources (CEDR) contains 9410 comments labeled for 5 emotion categories (joy, sadness, surprise, fear, and anger). \n\nHere are 2 dataset configurations:\n- \"main\" - contains \"text\", \"labels\", and \"source\" features;\n- \"enriched\" - includes all \"main\" features and \"sentences\".\n\nDataset with predefined train/test splits.### Supported Tasks and Leaderboards\n\nThis dataset is intended for multi-label emotion classification.### Languages\n\nThe data is in Russian.## Dataset Structure### Data Instances\n\nEach instance is a text sentence in Russian from several sources with one or more emotion annotations (or no emotion at all).\n\nAn example for an instance from the dataset is shown below:\n\n\nEmotion label codes: {0: \"joy\", 1: \"sadness\", 2: \"surprise\", 3: \"fear\", 4: \"anger\"}", "passage: ### Data Fields\n\nThe main configuration includes:\n- text: the text of the sentence;\n- labels: the emotion annotations;\n- source: the tag name of the corresponding source\n\nIn addition to the above, the raw data includes:\n- sentences: text tokenized and lemmatized with udpipe\n - 'forma': the original word form;\n - 'lemma': the lemma of this word### Data Splits\n\nThe dataset includes a set of train/test splits. \nwith 7528, and 1882 examples respectively.## Dataset Creation### Curation Rationale\n\nThe formed dataset of examples consists of sentences in Russian from several sources (blogs, microblogs, news), which allows creating methods to analyse various types of texts. The created methodology for building the dataset based on applying a crowdsourcing service can be used to expand the number of examples to improve the accuracy of supervised classifiers.### Source Data#### Initial Data Collection and Normalization\n\nData was collected from several sources: posts of the Live Journal social network, texts of the online news agency URL, and Twitter microblog posts.\n\nOnly those sentences were selected that contained marker words from the dictionary of the emotive vocabulary of the Russian language. The authors manually formed a list of marker words for each emotion by choosing words from different categories of the dictionary.\n\nIn total, 3069 sentences were selected from LiveJournal posts, 2851 sentences from Lenta.Ru, and 3490 sentencesfrom Twitter. 
After selection, sentences were offered to annotators for labeling.#### Who are the source language producers?\n\nRussian-speaking LiveJournal and Tweeter users, and authors of news articles on the site URL.### Annotations#### Annotation process\n\nAnnotating sentences with labels of their emotions was performed with the help of a crowdsourcing platform.\n\nThe annotators’ task was: “What emotions did the author express in the sentence?”. The annotators were allowed to put an arbitrary number of the following emotion labels: \"joy\", \"sadness\", \"anger\", \"fear\", and \"surprise\".\n\nIf the accuracy of an annotator on the control sentences (including the trial run) became less than 70%, or if the accuracy was less than 66% over the last six control samples, the annotator was dismissed. \n\nSentences were split into tasks and assigned to annotators so that each sentence was annotated at least three times. A label of a specific emotion was assigned to a sentence if put by more than half of the annotators." ]
6627f9390245fe11ef09f349b82f6c89f577aabf
# Dataset Card for "cfq" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/google-research/google-research/tree/master/cfq](https://github.com/google-research/google-research/tree/master/cfq) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** https://arxiv.org/abs/1912.09713 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 2.14 GB - **Size of the generated dataset:** 362.07 MB - **Total amount of disk used:** 2.50 GB ### Dataset Summary The Compositional Freebase Questions (CFQ) is a dataset that is specifically designed to measure compositional generalization. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding SPARQL query against the Freebase knowledge base. This means that CFQ can also be used for semantic parsing. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages English (`en`). ## Dataset Structure ### Data Instances #### mcd1 - **Size of downloaded dataset files:** 267.60 MB - **Size of the generated dataset:** 42.90 MB - **Total amount of disk used:** 310.49 MB An example of 'train' looks as follows. ``` { 'query': 'SELECT count(*) WHERE {\n?x0 a ns:people.person .\n?x0 ns:influence.influence_node.influenced M1 .\n?x0 ns:influence.influence_node.influenced M2 .\n?x0 ns:people.person.spouse_s/ns:people.marriage.spouse|ns:fictional_universe.fictional_character.married_to/ns:fictional_universe.marriage_of_fictional_characters.spouses ?x1 .\n?x1 a ns:film.cinematographer .\nFILTER ( ?x0 != ?x1 )\n}', 'question': 'Did a person marry a cinematographer , influence M1 , and influence M2' } ``` #### mcd2 - **Size of downloaded dataset files:** 267.60 MB - **Size of the generated dataset:** 44.77 MB - **Total amount of disk used:** 312.38 MB An example of 'train' looks as follows. 
``` { 'query': 'SELECT count(*) WHERE {\n?x0 ns:people.person.parents|ns:fictional_universe.fictional_character.parents|ns:organization.organization.parent/ns:organization.organization_relationship.parent ?x1 .\n?x1 a ns:people.person .\nM1 ns:business.employer.employees/ns:business.employment_tenure.person ?x0 .\nM1 ns:business.employer.employees/ns:business.employment_tenure.person M2 .\nM1 ns:business.employer.employees/ns:business.employment_tenure.person M3 .\nM1 ns:business.employer.employees/ns:business.employment_tenure.person M4 .\nM5 ns:business.employer.employees/ns:business.employment_tenure.person ?x0 .\nM5 ns:business.employer.employees/ns:business.employment_tenure.person M2 .\nM5 ns:business.employer.employees/ns:business.employment_tenure.person M3 .\nM5 ns:business.employer.employees/ns:business.employment_tenure.person M4\n}', 'question': "Did M1 and M5 employ M2 , M3 , and M4 and employ a person 's child" } ``` #### mcd3 - **Size of downloaded dataset files:** 267.60 MB - **Size of the generated dataset:** 43.60 MB - **Total amount of disk used:** 311.20 MB An example of 'train' looks as follows. ``` { "query": "SELECT /producer M0 . /director M0 . ", "question": "Who produced and directed M0?" } ``` #### query_complexity_split - **Size of downloaded dataset files:** 267.60 MB - **Size of the generated dataset:** 45.95 MB - **Total amount of disk used:** 313.55 MB An example of 'train' looks as follows. ``` { "query": "SELECT /producer M0 . /director M0 . ", "question": "Who produced and directed M0?" } ``` #### query_pattern_split - **Size of downloaded dataset files:** 267.60 MB - **Size of the generated dataset:** 46.12 MB - **Total amount of disk used:** 313.72 MB An example of 'train' looks as follows. ``` { "query": "SELECT /producer M0 . /director M0 . ", "question": "Who produced and directed M0?" } ``` ### Data Fields The data fields are the same among all splits and configurations: - `question`: a `string` feature. - `query`: a `string` feature. ### Data Splits | name | train | test | |---------------------------|-------:|------:| | mcd1 | 95743 | 11968 | | mcd2 | 95743 | 11968 | | mcd3 | 95743 | 11968 | | query_complexity_split | 100654 | 9512 | | query_pattern_split | 94600 | 12589 | | question_complexity_split | 98999 | 10340 | | question_pattern_split | 95654 | 11909 | | random_split | 95744 | 11967 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? 
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{Keysers2020, title={Measuring Compositional Generalization: A Comprehensive Method on Realistic Data}, author={Daniel Keysers and Nathanael Sch\"{a}rli and Nathan Scales and Hylke Buisman and Daniel Furrer and Sergii Kashubin and Nikola Momchev and Danila Sinopalnikov and Lukasz Stafiniak and Tibor Tihon and Dmitry Tsarkov and Xiao Wang and Marc van Zee and Olivier Bousquet}, booktitle={ICLR}, year={2020}, url={https://arxiv.org/abs/1912.09713}, } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@brainshawn](https://github.com/brainshawn) for adding this dataset.
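A brief usage sketch in Python (assuming the dataset is available on the Hugging Face Hub under the id `cfq`; the configuration names follow the Data Splits table above):

```python
from datasets import load_dataset

# Load the MCD1 compositional split (Hub dataset id "cfq" assumed).
dataset = load_dataset("cfq", "mcd1")

example = dataset["train"][0]
print(example["question"])  # natural-language question
print(example["query"])     # corresponding SPARQL query against Freebase
```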
cfq
[ "task_categories:question-answering", "task_categories:other", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "compositionality", "arxiv:1912.09713", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["question-answering", "other"], "task_ids": ["open-domain-qa", "closed-domain-qa"], "paperswithcode_id": "cfq", "pretty_name": "Compositional Freebase Questions", "tags": ["compositionality"], "dataset_info": [{"config_name": "mcd1", "features": [{"name": "question", "dtype": "string"}, {"name": "query", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37408806, "num_examples": 95743}, {"name": "test", "num_bytes": 5446503, "num_examples": 11968}], "download_size": 8570962, "dataset_size": 42855309}, {"config_name": "mcd2", "features": [{"name": "question", "dtype": "string"}, {"name": "query", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39424657, "num_examples": 95743}, {"name": "test", "num_bytes": 5314019, "num_examples": 11968}], "download_size": 8867866, "dataset_size": 44738676}, {"config_name": "mcd3", "features": [{"name": "question", "dtype": "string"}, {"name": "query", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 38316345, "num_examples": 95743}, {"name": "test", "num_bytes": 5244503, "num_examples": 11968}], "download_size": 8578142, "dataset_size": 43560848}, {"config_name": "query_complexity_split", "features": [{"name": "question", "dtype": "string"}, {"name": "query", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40270175, "num_examples": 100654}, {"name": "test", "num_bytes": 5634924, "num_examples": 9512}], "download_size": 9303588, "dataset_size": 45905099}, {"config_name": "query_pattern_split", "features": [{"name": "question", "dtype": "string"}, {"name": "query", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40811284, "num_examples": 94600}, {"name": "test", "num_bytes": 5268358, "num_examples": 12589}], "download_size": 9387759, "dataset_size": 46079642}, {"config_name": "question_complexity_split", "features": [{"name": "question", "dtype": "string"}, {"name": "query", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39989433, "num_examples": 98999}, {"name": "test", "num_bytes": 5781561, "num_examples": 10340}], "download_size": 9255771, "dataset_size": 45770994}, {"config_name": "question_pattern_split", "features": [{"name": "question", "dtype": "string"}, {"name": "query", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41217350, "num_examples": 95654}, {"name": "test", "num_bytes": 5179936, "num_examples": 11909}], "download_size": 9482990, "dataset_size": 46397286}, {"config_name": "random_split", "features": [{"name": "question", "dtype": "string"}, {"name": "query", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41279218, "num_examples": 95744}, {"name": "test", "num_bytes": 5164923, "num_examples": 11967}], "download_size": 9533853, "dataset_size": 46444141}], "configs": [{"config_name": "mcd1", "data_files": [{"split": "train", "path": "mcd1/train-*"}, {"split": "test", "path": "mcd1/test-*"}]}, {"config_name": "mcd2", "data_files": [{"split": "train", "path": "mcd2/train-*"}, {"split": "test", "path": "mcd2/test-*"}]}, {"config_name": "mcd3", "data_files": [{"split": "train", "path": "mcd3/train-*"}, {"split": "test", "path": "mcd3/test-*"}]}, {"config_name": "query_complexity_split", "data_files": [{"split": "train", "path": "query_complexity_split/train-*"}, {"split": 
"test", "path": "query_complexity_split/test-*"}]}, {"config_name": "query_pattern_split", "data_files": [{"split": "train", "path": "query_pattern_split/train-*"}, {"split": "test", "path": "query_pattern_split/test-*"}]}, {"config_name": "question_complexity_split", "data_files": [{"split": "train", "path": "question_complexity_split/train-*"}, {"split": "test", "path": "question_complexity_split/test-*"}]}, {"config_name": "question_pattern_split", "data_files": [{"split": "train", "path": "question_pattern_split/train-*"}, {"split": "test", "path": "question_pattern_split/test-*"}]}, {"config_name": "random_split", "data_files": [{"split": "train", "path": "random_split/train-*"}, {"split": "test", "path": "random_split/test-*"}]}]}
2024-01-18T14:16:34+00:00
[ "1912.09713" ]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-other #task_ids-open-domain-qa #task_ids-closed-domain-qa #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #compositionality #arxiv-1912.09713 #region-us
Dataset Card for "cfq" ====================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: URL * Point of Contact: * Size of downloaded dataset files: 2.14 GB * Size of the generated dataset: 362.07 MB * Total amount of disk used: 2.50 GB ### Dataset Summary The Compositional Freebase Questions (CFQ) is a dataset that is specifically designed to measure compositional generalization. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also provides for each question a corresponding SPARQL query against the Freebase knowledge base. This means that CFQ can also be used for semantic parsing. ### Supported Tasks and Leaderboards ### Languages English ('en'). Dataset Structure ----------------- ### Data Instances #### mcd1 * Size of downloaded dataset files: 267.60 MB * Size of the generated dataset: 42.90 MB * Total amount of disk used: 310.49 MB An example of 'train' looks as follows. #### mcd2 * Size of downloaded dataset files: 267.60 MB * Size of the generated dataset: 44.77 MB * Total amount of disk used: 312.38 MB An example of 'train' looks as follows. #### mcd3 * Size of downloaded dataset files: 267.60 MB * Size of the generated dataset: 43.60 MB * Total amount of disk used: 311.20 MB An example of 'train' looks as follows. #### query\_complexity\_split * Size of downloaded dataset files: 267.60 MB * Size of the generated dataset: 45.95 MB * Total amount of disk used: 313.55 MB An example of 'train' looks as follows. #### query\_pattern\_split * Size of downloaded dataset files: 267.60 MB * Size of the generated dataset: 46.12 MB * Total amount of disk used: 313.72 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits and configurations: * 'question': a 'string' feature. * 'query': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @thomwolf, @patrickvonplaten, @lewtun, @brainshawn for adding this dataset.
[ "### Dataset Summary\n\n\nThe Compositional Freebase Questions (CFQ) is a dataset that is specifically designed to measure compositional\ngeneralization. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also\nprovides for each question a corresponding SPARQL query against the Freebase knowledge base. This means that CFQ can\nalso be used for semantic parsing.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish ('en').\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### mcd1\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 42.90 MB\n* Total amount of disk used: 310.49 MB\n\n\nAn example of 'train' looks as follows.", "#### mcd2\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 44.77 MB\n* Total amount of disk used: 312.38 MB\n\n\nAn example of 'train' looks as follows.", "#### mcd3\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 43.60 MB\n* Total amount of disk used: 311.20 MB\n\n\nAn example of 'train' looks as follows.", "#### query\\_complexity\\_split\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 45.95 MB\n* Total amount of disk used: 313.55 MB\n\n\nAn example of 'train' looks as follows.", "#### query\\_pattern\\_split\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 46.12 MB\n* Total amount of disk used: 313.72 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits and configurations:\n\n\n* 'question': a 'string' feature.\n* 'query': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @patrickvonplaten, @lewtun, @brainshawn for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_categories-other #task_ids-open-domain-qa #task_ids-closed-domain-qa #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #compositionality #arxiv-1912.09713 #region-us \n", "### Dataset Summary\n\n\nThe Compositional Freebase Questions (CFQ) is a dataset that is specifically designed to measure compositional\ngeneralization. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also\nprovides for each question a corresponding SPARQL query against the Freebase knowledge base. This means that CFQ can\nalso be used for semantic parsing.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nEnglish ('en').\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### mcd1\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 42.90 MB\n* Total amount of disk used: 310.49 MB\n\n\nAn example of 'train' looks as follows.", "#### mcd2\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 44.77 MB\n* Total amount of disk used: 312.38 MB\n\n\nAn example of 'train' looks as follows.", "#### mcd3\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 43.60 MB\n* Total amount of disk used: 311.20 MB\n\n\nAn example of 'train' looks as follows.", "#### query\\_complexity\\_split\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 45.95 MB\n* Total amount of disk used: 313.55 MB\n\n\nAn example of 'train' looks as follows.", "#### query\\_pattern\\_split\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 46.12 MB\n* Total amount of disk used: 313.72 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits and configurations:\n\n\n* 'question': a 'string' feature.\n* 'query': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @patrickvonplaten, @lewtun, @brainshawn for adding this dataset." ]
[ 129, 89, 10, 17, 6, 55, 55, 56, 63, 62, 44, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 34 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-other #task_ids-open-domain-qa #task_ids-closed-domain-qa #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #compositionality #arxiv-1912.09713 #region-us \n### Dataset Summary\n\n\nThe Compositional Freebase Questions (CFQ) is a dataset that is specifically designed to measure compositional\ngeneralization. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also\nprovides for each question a corresponding SPARQL query against the Freebase knowledge base. This means that CFQ can\nalso be used for semantic parsing.### Supported Tasks and Leaderboards### Languages\n\n\nEnglish ('en').\n\n\nDataset Structure\n-----------------### Data Instances#### mcd1\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 42.90 MB\n* Total amount of disk used: 310.49 MB\n\n\nAn example of 'train' looks as follows.#### mcd2\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 44.77 MB\n* Total amount of disk used: 312.38 MB\n\n\nAn example of 'train' looks as follows.#### mcd3\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 43.60 MB\n* Total amount of disk used: 311.20 MB\n\n\nAn example of 'train' looks as follows.#### query\\_complexity\\_split\n\n\n* Size of downloaded dataset files: 267.60 MB\n* Size of the generated dataset: 45.95 MB\n* Total amount of disk used: 313.55 MB\n\n\nAn example of 'train' looks as follows." ]
1b111eca2b6f2c08ff347b916a3b9cf05642a135
# Dataset Card for ChrEn ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [Github repository for ChrEn](https://github.com/ZhangShiyue/ChrEn) - **Paper:** [ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization](https://arxiv.org/abs/2010.04791) - **Point of Contact:** [benfrey@email.unc.edu](benfrey@email.unc.edu) ### Dataset Summary ChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English. ChrEn is extremely low-resource: it contains 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation. ChrEn also contains 5k monolingual Cherokee sentences to enable semi-supervised learning. ### Supported Tasks and Leaderboards The dataset is intended to be used for `machine-translation` between English (`en`) and Cherokee (`chr`). ### Languages The dataset contains English (`en`) and Cherokee (`chr`) text. The data encompasses both existing dialects of Cherokee: the Overhill dialect, mostly spoken in Oklahoma (OK), and the Middle dialect, mostly used in North Carolina (NC). ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Many of the source texts were translations of English materials, which means that the Cherokee structures may not be 100% natural in terms of what a speaker might spontaneously produce. Each text was translated by people who speak Cherokee as their first language, which means there is a high probability of grammaticality. These data were originally available as PDFs. We applied optical character recognition (OCR) via the Tesseract OCR engine to extract the Cherokee and English text. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? The sentences were manually aligned by Dr. Benjamin Frey, a proficient second-language speaker of Cherokee, who also fixed the errors introduced by OCR. This process was time-consuming and took several months.
### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was gathered and annotated by Shiyue Zhang, Benjamin Frey, and Mohit Bansal at UNC Chapel Hill. ### Licensing Information The copyright of the data belongs to the original book/article authors or translators (hence, the data are used for research purposes only; please contact Dr. Benjamin Frey with other copyright questions). ### Citation Information ``` @inproceedings{zhang2020chren, title={ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization}, author={Zhang, Shiyue and Frey, Benjamin and Bansal, Mohit}, booktitle={EMNLP2020}, year={2020} } ``` ### Contributions Thanks to [@yjernite](https://github.com/yjernite), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
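A minimal Python sketch for loading the parallel sentence pairs (assuming the Hub dataset id is `chr_en`; the `sentence_pair` translation feature and its language codes follow the dataset metadata below):

```python
from datasets import load_dataset

# Load the parallel Cherokee-English configuration
# (Hub dataset id "chr_en" is an assumption).
dataset = load_dataset("chr_en", "parallel")

# The translation feature decodes to a dict keyed by language code.
pair = dataset["train"][0]["sentence_pair"]
print(pair["chr"], "->", pair["en"])
```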
chr_en
[ "task_categories:fill-mask", "task_categories:text-generation", "task_categories:translation", "task_ids:language-modeling", "task_ids:masked-language-modeling", "annotations_creators:expert-generated", "annotations_creators:found", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "multilinguality:multilingual", "multilinguality:translation", "size_categories:100K<n<1M", "size_categories:10K<n<100K", "size_categories:1K<n<10K", "source_datasets:original", "language:chr", "language:en", "license:other", "arxiv:2010.04791", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated", "found", "no-annotation"], "language_creators": ["found"], "language": ["chr", "en"], "license": ["other"], "multilinguality": ["monolingual", "multilingual", "translation"], "size_categories": ["100K<n<1M", "10K<n<100K", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["fill-mask", "text-generation", "translation"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "chren", "config_names": ["monolingual", "monolingual_raw", "parallel", "parallel_raw"], "dataset_info": [{"config_name": "monolingual", "features": [{"name": "sentence", "dtype": "string"}], "splits": [{"name": "chr", "num_bytes": 882824, "num_examples": 5210}, {"name": "en5000", "num_bytes": 615275, "num_examples": 5000}, {"name": "en10000", "num_bytes": 1211605, "num_examples": 10000}, {"name": "en20000", "num_bytes": 2432298, "num_examples": 20000}, {"name": "en50000", "num_bytes": 6065580, "num_examples": 49999}, {"name": "en100000", "num_bytes": 12130164, "num_examples": 100000}], "download_size": 16967664, "dataset_size": 23337746}, {"config_name": "monolingual_raw", "features": [{"name": "text_sentence", "dtype": "string"}, {"name": "text_title", "dtype": "string"}, {"name": "speaker", "dtype": "string"}, {"name": "date", "dtype": "int32"}, {"name": "type", "dtype": "string"}, {"name": "dialect", "dtype": "string"}], "splits": [{"name": "full", "num_bytes": 1210056, "num_examples": 5210}], "download_size": 410646, "dataset_size": 1210056}, {"config_name": "parallel", "features": [{"name": "sentence_pair", "dtype": {"translation": {"languages": ["en", "chr"]}}}], "splits": [{"name": "train", "num_bytes": 3089562, "num_examples": 11639}, {"name": "dev", "num_bytes": 260401, "num_examples": 1000}, {"name": "out_dev", "num_bytes": 78126, "num_examples": 256}, {"name": "test", "num_bytes": 264595, "num_examples": 1000}, {"name": "out_test", "num_bytes": 80959, "num_examples": 256}], "download_size": 2143266, "dataset_size": 3773643}, {"config_name": "parallel_raw", "features": [{"name": "line_number", "dtype": "string"}, {"name": "sentence_pair", "dtype": {"translation": {"languages": ["en", "chr"]}}}, {"name": "text_title", "dtype": "string"}, {"name": "speaker", "dtype": "string"}, {"name": "date", "dtype": "int32"}, {"name": "type", "dtype": "string"}, {"name": "dialect", "dtype": "string"}], "splits": [{"name": "full", "num_bytes": 5010734, "num_examples": 14151}], "download_size": 2018726, "dataset_size": 5010734}], "configs": [{"config_name": "monolingual", "data_files": [{"split": "chr", "path": "monolingual/chr-*"}, {"split": "en5000", "path": "monolingual/en5000-*"}, {"split": "en10000", "path": "monolingual/en10000-*"}, {"split": "en20000", "path": "monolingual/en20000-*"}, {"split": "en50000", "path": "monolingual/en50000-*"}, {"split": "en100000", "path": "monolingual/en100000-*"}]}, {"config_name": "monolingual_raw", "data_files": [{"split": "full", "path": "monolingual_raw/full-*"}]}, {"config_name": "parallel", "data_files": [{"split": "train", "path": "parallel/train-*"}, {"split": "dev", "path": "parallel/dev-*"}, {"split": "out_dev", "path": "parallel/out_dev-*"}, {"split": "test", "path": "parallel/test-*"}, {"split": "out_test", "path": "parallel/out_test-*"}], "default": true}, {"config_name": "parallel_raw", "data_files": [{"split": "full", "path": "parallel_raw/full-*"}]}]}
2024-01-18T14:19:36+00:00
[ "2010.04791" ]
[ "chr", "en" ]
TAGS #task_categories-fill-mask #task_categories-text-generation #task_categories-translation #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #annotations_creators-found #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #multilinguality-multilingual #multilinguality-translation #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-Cherokee #language-English #license-other #arxiv-2010.04791 #region-us
# Dataset Card for ChrEn ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: Github repository for ChrEn - Paper: ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization - Point of Contact: benfrey@URL ### Dataset Summary ChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English. ChrEn is extremely low-resource: it contains 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation. ChrEn also contains 5k monolingual Cherokee sentences to enable semi-supervised learning. ### Supported Tasks and Leaderboards The dataset is intended to be used for 'machine-translation' between English ('en') and Cherokee ('chr'). ### Languages The dataset contains English ('en') and Cherokee ('chr') text. The data encompasses both existing dialects of Cherokee: the Overhill dialect, mostly spoken in Oklahoma (OK), and the Middle dialect, mostly used in North Carolina (NC). ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization Many of the source texts were translations of English materials, which means that the Cherokee structures may not be 100% natural in terms of what a speaker might spontaneously produce. Each text was translated by people who speak Cherokee as their first language, which means there is a high probability of grammaticality. These data were originally available as PDFs. We applied optical character recognition (OCR) via the Tesseract OCR engine to extract the Cherokee and English text. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? The sentences were manually aligned by Dr. Benjamin Frey, a proficient second-language speaker of Cherokee, who also fixed the errors introduced by OCR. This process was time-consuming and took several months. ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators The dataset was gathered and annotated by Shiyue Zhang, Benjamin Frey, and Mohit Bansal at UNC Chapel Hill. ### Licensing Information The copyright of the data belongs to the original book/article authors or translators (hence, the data are used for research purposes only; please contact Dr. Benjamin Frey with other copyright questions). ### Contributions Thanks to @yjernite, @lhoestq for adding this dataset.
[ "# Dataset Card for ChrEn", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: Github repository for ChrEn\n- Paper: ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization\n- Point of Contact: benfrey@URL", "### Dataset Summary\n\nChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English.\nChrEn is extremely low-resource contains 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation.\nChrEn also contains 5k Cherokee monolingual data to enable semi-supervised learning.", "### Supported Tasks and Leaderboards\n\nThe dataset is intended to use for 'machine-translation' between Enlish ('en') and Cherokee ('chr').", "### Languages\n\nThe dataset contains Enlish ('en') and Cherokee ('chr') text. The data encompasses both existing dialects of Cherokee: the Overhill dialect, mostly spoken in Oklahoma (OK), and the Middle dialect, mostly used in North Carolina (NC).", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nMany of the source texts were translations of English materials, which means that the Cherokee structures may not be 100% natural in terms of what a speaker might spontaneously produce. Each text was translated by people who speak Cherokee as the first language, which means there is a high probability of grammaticality. These data were originally available in PDF version. We apply the Optical Character Recognition (OCR) via Tesseract OCR engine to extract the Cherokee and English text.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe sentences were manually aligned by Dr. Benjamin Frey a proficient second-language speaker of Cherokee, who also fixed the errors introduced by OCR. This process is time-consuming and took several months.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe dataset was gathered and annotated by Shiyue Zhang, Benjamin Frey, and Mohit Bansal at UNC Chapel Hill.", "### Licensing Information\n\nThe copyright of the data belongs to original book/article authors or translators (hence, used for research purpose; and please contact Dr. Benjamin Frey for other copyright questions).", "### Contributions\n\nThanks to @yjernite, @lhoestq for adding this dataset." ]
[ "TAGS\n#task_categories-fill-mask #task_categories-text-generation #task_categories-translation #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #annotations_creators-found #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #multilinguality-multilingual #multilinguality-translation #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-Cherokee #language-English #license-other #arxiv-2010.04791 #region-us \n", "# Dataset Card for ChrEn", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: Github repository for ChrEn\n- Paper: ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization\n- Point of Contact: benfrey@URL", "### Dataset Summary\n\nChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English.\nChrEn is extremely low-resource contains 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation.\nChrEn also contains 5k Cherokee monolingual data to enable semi-supervised learning.", "### Supported Tasks and Leaderboards\n\nThe dataset is intended to use for 'machine-translation' between Enlish ('en') and Cherokee ('chr').", "### Languages\n\nThe dataset contains Enlish ('en') and Cherokee ('chr') text. The data encompasses both existing dialects of Cherokee: the Overhill dialect, mostly spoken in Oklahoma (OK), and the Middle dialect, mostly used in North Carolina (NC).", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nMany of the source texts were translations of English materials, which means that the Cherokee structures may not be 100% natural in terms of what a speaker might spontaneously produce. Each text was translated by people who speak Cherokee as the first language, which means there is a high probability of grammaticality. These data were originally available in PDF version. We apply the Optical Character Recognition (OCR) via Tesseract OCR engine to extract the Cherokee and English text.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe sentences were manually aligned by Dr. Benjamin Frey a proficient second-language speaker of Cherokee, who also fixed the errors introduced by OCR. 
This process is time-consuming and took several months.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe dataset was gathered and annotated by Shiyue Zhang, Benjamin Frey, and Mohit Bansal at UNC Chapel Hill.", "### Licensing Information\n\nThe copyright of the data belongs to original book/article authors or translators (hence, used for research purpose; and please contact Dr. Benjamin Frey for other copyright questions).", "### Contributions\n\nThanks to @yjernite, @lhoestq for adding this dataset." ]
[ 195, 7, 120, 49, 93, 42, 69, 6, 6, 5, 5, 5, 7, 4, 121, 10, 5, 5, 60, 8, 8, 7, 8, 7, 5, 39, 47, 22 ]
[ "passage: TAGS\n#task_categories-fill-mask #task_categories-text-generation #task_categories-translation #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #annotations_creators-found #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #multilinguality-multilingual #multilinguality-translation #size_categories-100K<n<1M #size_categories-10K<n<100K #size_categories-1K<n<10K #source_datasets-original #language-Cherokee #language-English #license-other #arxiv-2010.04791 #region-us \n# Dataset Card for ChrEn## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Repository: Github repository for ChrEn\n- Paper: ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization\n- Point of Contact: benfrey@URL### Dataset Summary\n\nChrEn is a Cherokee-English parallel dataset to facilitate machine translation research between Cherokee and English.\nChrEn is extremely low-resource contains 14k sentence pairs in total, split in ways that facilitate both in-domain and out-of-domain evaluation.\nChrEn also contains 5k Cherokee monolingual data to enable semi-supervised learning.### Supported Tasks and Leaderboards\n\nThe dataset is intended to use for 'machine-translation' between Enlish ('en') and Cherokee ('chr')." ]
0b2714987fa478483af9968de7c934580d0bb9a2
# Dataset Card for CIFAR-10

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.cs.toronto.edu/~kriz/cifar.html
- **Repository:**
- **Paper:** Learning Multiple Layers of Features from Tiny Images by Alex Krizhevsky
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.

The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.

### Supported Tasks and Leaderboards

- `image-classification`: The goal of this task is to classify a given image into one of 10 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cifar-10).

### Languages

English

## Dataset Structure

### Data Instances

A sample from the training set is provided below:

```
{
  'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x201FA6EE748>,
  'label': 0
}
```

### Data Fields

- `img`: a `PIL.Image.Image` object containing the 32x32 image. Note that when accessing the image column (`dataset[0]["img"]`) the image file is automatically decoded, and decoding a large number of image files can take a significant amount of time. Thus it is important to query the sample index before the `"img"` column, *i.e.* `dataset[0]["img"]` should **always** be preferred over `dataset["img"][0]`.
- `label`: an integer from 0 to 9 with the following correspondence:

  0 airplane
  1 automobile
  2 bird
  3 cat
  4 deer
  5 dog
  6 frog
  7 horse
  8 ship
  9 truck

### Data Splits

Train and Test

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@TECHREPORT{Krizhevsky09learningmultiple,
    author = {Alex Krizhevsky},
    title = {Learning multiple layers of features from tiny images},
    institution = {},
    year = {2009}
}
```

### Contributions

Thanks to [@czabo](https://github.com/czabo) for adding this dataset.
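To make the access pattern described in the Data Fields section concrete, here is a minimal sketch using the Hugging Face `datasets` library; the only assumption beyond the card itself is that the library is installed:

```python
from datasets import load_dataset

# Load the training split of CIFAR-10 ("plain_text" is the default config).
ds = load_dataset("cifar10", split="train")

# Query the sample index first and the "img" column second, so that only
# this one image file is decoded rather than the whole column.
example = ds[0]
img = example["img"]                                   # 32x32 RGB PIL image
name = ds.features["label"].int2str(example["label"])  # e.g. "airplane"
print(img.size, name)
```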
cifar10
[ "task_categories:image-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-80-Million-Tiny-Images", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-80-Million-Tiny-Images"], "task_categories": ["image-classification"], "task_ids": [], "paperswithcode_id": "cifar-10", "pretty_name": "Cifar10", "dataset_info": {"config_name": "plain_text", "features": [{"name": "img", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "airplane", "1": "automobile", "2": "bird", "3": "cat", "4": "deer", "5": "dog", "6": "frog", "7": "horse", "8": "ship", "9": "truck"}}}}], "splits": [{"name": "train", "num_bytes": 113648310.0, "num_examples": 50000}, {"name": "test", "num_bytes": 22731580.0, "num_examples": 10000}], "download_size": 143646105, "dataset_size": 136379890.0}, "configs": [{"config_name": "plain_text", "data_files": [{"split": "train", "path": "plain_text/train-*"}, {"split": "test", "path": "plain_text/test-*"}], "default": true}]}
2024-01-04T06:53:11+00:00
[]
[ "en" ]
TAGS #task_categories-image-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-80-Million-Tiny-Images #language-English #license-unknown #region-us
# Dataset Card for CIFAR-10 ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: Learning Multiple Layers of Features from Tiny Images by Alex Krizhevsky - Leaderboard: - Point of Contact: ### Dataset Summary The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class. ### Supported Tasks and Leaderboards - 'image-classification': The goal of this task is to classify a given image into one of 10 classes. The leaderboard is available here. ### Languages English ## Dataset Structure ### Data Instances A sample from the training set is provided below: ### Data Fields - img: A 'PIL.Image.Image' object containing the 32x32 image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]' - label: 0-9 with the following correspondence 0 airplane 1 automobile 2 bird 3 cat 4 deer 5 dog 6 frog 7 horse 8 ship 9 truck ### Data Splits Train and Test ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @czabo for adding this dataset.
[ "# Dataset Card for CIFAR-10", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: Learning Multiple Layers of Features from Tiny Images by Alex Krizhevsky\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.\nThe dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.", "### Supported Tasks and Leaderboards\n\n- 'image-classification': The goal of this task is to classify a given image into one of 10 classes. The leaderboard is available here.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nA sample from the training set is provided below:", "### Data Fields\n\n- img: A 'PIL.Image.Image' object containing the 32x32 image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- label: 0-9 with the following correspondence\n 0 airplane\n 1 automobile\n 2 bird\n 3 cat\n 4 deer\n 5 dog\n 6 frog\n 7 horse\n 8 ship\n 9 truck", "### Data Splits\n\nTrain and Test", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @czabo for adding this dataset." ]
[ "TAGS\n#task_categories-image-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-80-Million-Tiny-Images #language-English #license-unknown #region-us \n", "# Dataset Card for CIFAR-10", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: Learning Multiple Layers of Features from Tiny Images by Alex Krizhevsky\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.\nThe dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.", "### Supported Tasks and Leaderboards\n\n- 'image-classification': The goal of this task is to classify a given image into one of 10 classes. The leaderboard is available here.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nA sample from the training set is provided below:", "### Data Fields\n\n- img: A 'PIL.Image.Image' object containing the 32x32 image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- label: 0-9 with the following correspondence\n 0 airplane\n 1 automobile\n 2 bird\n 3 cat\n 4 deer\n 5 dog\n 6 frog\n 7 horse\n 8 ship\n 9 truck", "### Data Splits\n\nTrain and Test", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @czabo for adding this dataset." ]
[ 91, 8, 120, 44, 128, 43, 5, 6, 16, 167, 8, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 16 ]
[ "passage: TAGS\n#task_categories-image-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-80-Million-Tiny-Images #language-English #license-unknown #region-us \n# Dataset Card for CIFAR-10## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: Learning Multiple Layers of Features from Tiny Images by Alex Krizhevsky\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nThe CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.\nThe dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.### Supported Tasks and Leaderboards\n\n- 'image-classification': The goal of this task is to classify a given image into one of 10 classes. The leaderboard is available here.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nA sample from the training set is provided below:" ]
aadb3af77e9048adbea6b47c21a81e47dd092ae5
# Dataset Card for CIFAR-100

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [CIFAR Datasets](https://www.cs.toronto.edu/~kriz/cifar.html)
- **Repository:**
- **Paper:** [Paper](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images per class. There are 500 training images and 100 testing images per class. There are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses. There are two labels per image: a fine label (the actual class) and a coarse label (the superclass).

### Supported Tasks and Leaderboards

- `image-classification`: The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-cifar-100).

### Languages

English

## Dataset Structure

### Data Instances

A sample from the training set is provided below:

```
{
  'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x2767F58E080>,
  'fine_label': 19,
  'coarse_label': 11
}
```

### Data Fields

- `img`: a `PIL.Image.Image` object containing the 32x32 image. Note that when accessing the image column (`dataset[0]["img"]`) the image file is automatically decoded, and decoding a large number of image files can take a significant amount of time. Thus it is important to query the sample index before the `"img"` column, *i.e.* `dataset[0]["img"]` should **always** be preferred over `dataset["img"][0]`.
- `fine_label`: an `int` classification label with the following mapping:

  `0`: apple
  `1`: aquarium_fish
  `2`: baby
  `3`: bear
  `4`: beaver
  `5`: bed
  `6`: bee
  `7`: beetle
  `8`: bicycle
  `9`: bottle
  `10`: bowl
  `11`: boy
  `12`: bridge
  `13`: bus
  `14`: butterfly
  `15`: camel
  `16`: can
  `17`: castle
  `18`: caterpillar
  `19`: cattle
  `20`: chair
  `21`: chimpanzee
  `22`: clock
  `23`: cloud
  `24`: cockroach
  `25`: couch
  `26`: cra
  `27`: crocodile
  `28`: cup
  `29`: dinosaur
  `30`: dolphin
  `31`: elephant
  `32`: flatfish
  `33`: forest
  `34`: fox
  `35`: girl
  `36`: hamster
  `37`: house
  `38`: kangaroo
  `39`: keyboard
  `40`: lamp
  `41`: lawn_mower
  `42`: leopard
  `43`: lion
  `44`: lizard
  `45`: lobster
  `46`: man
  `47`: maple_tree
  `48`: motorcycle
  `49`: mountain
  `50`: mouse
  `51`: mushroom
  `52`: oak_tree
  `53`: orange
  `54`: orchid
  `55`: otter
  `56`: palm_tree
  `57`: pear
  `58`: pickup_truck
  `59`: pine_tree
  `60`: plain
  `61`: plate
  `62`: poppy
  `63`: porcupine
  `64`: possum
  `65`: rabbit
  `66`: raccoon
  `67`: ray
  `68`: road
  `69`: rocket
  `70`: rose
  `71`: sea
  `72`: seal
  `73`: shark
  `74`: shrew
  `75`: skunk
  `76`: skyscraper
  `77`: snail
  `78`: snake
  `79`: spider
  `80`: squirrel
  `81`: streetcar
  `82`: sunflower
  `83`: sweet_pepper
  `84`: table
  `85`: tank
  `86`: telephone
  `87`: television
  `88`: tiger
  `89`: tractor
  `90`: train
  `91`: trout
  `92`: tulip
  `93`: turtle
  `94`: wardrobe
  `95`: whale
  `96`: willow_tree
  `97`: wolf
  `98`: woman
  `99`: worm

- `coarse_label`: an `int` coarse classification label with the following mapping:

  `0`: aquatic_mammals
  `1`: fish
  `2`: flowers
  `3`: food_containers
  `4`: fruit_and_vegetables
  `5`: household_electrical_devices
  `6`: household_furniture
  `7`: insects
  `8`: large_carnivores
  `9`: large_man-made_outdoor_things
  `10`: large_natural_outdoor_scenes
  `11`: large_omnivores_and_herbivores
  `12`: medium_mammals
  `13`: non-insect_invertebrates
  `14`: people
  `15`: reptiles
  `16`: small_mammals
  `17`: trees
  `18`: vehicles_1
  `19`: vehicles_2

### Data Splits

| name     | train | test  |
|----------|------:|------:|
| cifar100 | 50000 | 10000 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@TECHREPORT{Krizhevsky09learningmultiple,
    author = {Alex Krizhevsky},
    title = {Learning multiple layers of features from tiny images},
    institution = {},
    year = {2009}
}
```

### Contributions

Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset.
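Since each example carries both label granularities, a short sketch shows how the integer labels map back to their names; it assumes only that the Hugging Face `datasets` library is installed:

```python
from datasets import load_dataset

# Load the test split of CIFAR-100 ("cifar100" is the default config).
ds = load_dataset("cifar100", split="test")

example = ds[0]
# Map both integer labels back to strings via the ClassLabel features.
fine = ds.features["fine_label"].int2str(example["fine_label"])
coarse = ds.features["coarse_label"].int2str(example["coarse_label"])
print(fine, "belongs to superclass", coarse)  # e.g. pine_tree -> trees
```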
cifar100
[ "task_categories:image-classification", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-80-Million-Tiny-Images", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-80-Million-Tiny-Images"], "task_categories": ["image-classification"], "task_ids": [], "paperswithcode_id": "cifar-100", "pretty_name": "Cifar100", "dataset_info": {"config_name": "cifar100", "features": [{"name": "img", "dtype": "image"}, {"name": "fine_label", "dtype": {"class_label": {"names": {"0": "apple", "1": "aquarium_fish", "2": "baby", "3": "bear", "4": "beaver", "5": "bed", "6": "bee", "7": "beetle", "8": "bicycle", "9": "bottle", "10": "bowl", "11": "boy", "12": "bridge", "13": "bus", "14": "butterfly", "15": "camel", "16": "can", "17": "castle", "18": "caterpillar", "19": "cattle", "20": "chair", "21": "chimpanzee", "22": "clock", "23": "cloud", "24": "cockroach", "25": "couch", "26": "cra", "27": "crocodile", "28": "cup", "29": "dinosaur", "30": "dolphin", "31": "elephant", "32": "flatfish", "33": "forest", "34": "fox", "35": "girl", "36": "hamster", "37": "house", "38": "kangaroo", "39": "keyboard", "40": "lamp", "41": "lawn_mower", "42": "leopard", "43": "lion", "44": "lizard", "45": "lobster", "46": "man", "47": "maple_tree", "48": "motorcycle", "49": "mountain", "50": "mouse", "51": "mushroom", "52": "oak_tree", "53": "orange", "54": "orchid", "55": "otter", "56": "palm_tree", "57": "pear", "58": "pickup_truck", "59": "pine_tree", "60": "plain", "61": "plate", "62": "poppy", "63": "porcupine", "64": "possum", "65": "rabbit", "66": "raccoon", "67": "ray", "68": "road", "69": "rocket", "70": "rose", "71": "sea", "72": "seal", "73": "shark", "74": "shrew", "75": "skunk", "76": "skyscraper", "77": "snail", "78": "snake", "79": "spider", "80": "squirrel", "81": "streetcar", "82": "sunflower", "83": "sweet_pepper", "84": "table", "85": "tank", "86": "telephone", "87": "television", "88": "tiger", "89": "tractor", "90": "train", "91": "trout", "92": "tulip", "93": "turtle", "94": "wardrobe", "95": "whale", "96": "willow_tree", "97": "wolf", "98": "woman", "99": "worm"}}}}, {"name": "coarse_label", "dtype": {"class_label": {"names": {"0": "aquatic_mammals", "1": "fish", "2": "flowers", "3": "food_containers", "4": "fruit_and_vegetables", "5": "household_electrical_devices", "6": "household_furniture", "7": "insects", "8": "large_carnivores", "9": "large_man-made_outdoor_things", "10": "large_natural_outdoor_scenes", "11": "large_omnivores_and_herbivores", "12": "medium_mammals", "13": "non-insect_invertebrates", "14": "people", "15": "reptiles", "16": "small_mammals", "17": "trees", "18": "vehicles_1", "19": "vehicles_2"}}}}], "splits": [{"name": "train", "num_bytes": 112545106.0, "num_examples": 50000}, {"name": "test", "num_bytes": 22564261.0, "num_examples": 10000}], "download_size": 142291368, "dataset_size": 135109367.0}, "configs": [{"config_name": "cifar100", "data_files": [{"split": "train", "path": "cifar100/train-*"}, {"split": "test", "path": "cifar100/test-*"}], "default": true}]}
2024-01-04T06:57:47+00:00
[]
[ "en" ]
TAGS #task_categories-image-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-80-Million-Tiny-Images #language-English #license-unknown #region-us
Dataset Card for CIFAR-100 ========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: CIFAR Datasets * Repository: * Paper: Paper * Leaderboard: * Point of Contact: ### Dataset Summary The CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images per class. There are 500 training images and 100 testing images per class. There are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses. There are two labels per image - fine label (actual class) and coarse label (superclass). ### Supported Tasks and Leaderboards * 'image-classification': The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available here. ### Languages English Dataset Structure ----------------- ### Data Instances A sample from the training set is provided below: ### Data Fields * 'img': A 'PIL.Image.Image' object containing the 32x32 image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]' * 'fine\_label': an 'int' classification label with the following mapping: '0': apple '1': aquarium\_fish '2': baby '3': bear '4': beaver '5': bed '6': bee '7': beetle '8': bicycle '9': bottle '10': bowl '11': boy '12': bridge '13': bus '14': butterfly '15': camel '16': can '17': castle '18': caterpillar '19': cattle '20': chair '21': chimpanzee '22': clock '23': cloud '24': cockroach '25': couch '26': cra '27': crocodile '28': cup '29': dinosaur '30': dolphin '31': elephant '32': flatfish '33': forest '34': fox '35': girl '36': hamster '37': house '38': kangaroo '39': keyboard '40': lamp '41': lawn\_mower '42': leopard '43': lion '44': lizard '45': lobster '46': man '47': maple\_tree '48': motorcycle '49': mountain '50': mouse '51': mushroom '52': oak\_tree '53': orange '54': orchid '55': otter '56': palm\_tree '57': pear '58': pickup\_truck '59': pine\_tree '60': plain '61': plate '62': poppy '63': porcupine '64': possum '65': rabbit '66': raccoon '67': ray '68': road '69': rocket '70': rose '71': sea '72': seal '73': shark '74': shrew '75': skunk '76': skyscraper '77': snail '78': snake '79': spider '80': squirrel '81': streetcar '82': sunflower '83': sweet\_pepper '84': table '85': tank '86': telephone '87': television '88': tiger '89': tractor '90': train '91': trout '92': tulip '93': turtle '94': wardrobe '95': whale '96': willow\_tree '97': wolf '98': woman '99': worm * 'coarse\_label': an 'int' coarse classification label with following mapping: '0': aquatic\_mammals '1': fish '2': flowers '3': food\_containers '4': fruit\_and\_vegetables '5': household\_electrical\_devices '6': household\_furniture '7': insects '8': large\_carnivores '9': large\_man-made\_outdoor\_things '10': 
large\_natural\_outdoor\_scenes '11': large\_omnivores\_and\_herbivores '12': medium\_mammals '13': non-insect\_invertebrates '14': people '15': reptiles '16': small\_mammals '17': trees '18': vehicles\_1 '19': vehicles\_2 ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @gchhablani for adding this dataset.
[ "### Dataset Summary\n\n\nThe CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images\nper class. There are 500 training images and 100 testing images per class. There are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses.\nThere are two labels per image - fine label (actual class) and coarse label (superclass).", "### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available here.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from the training set is provided below:", "### Data Fields\n\n\n* 'img': A 'PIL.Image.Image' object containing the 32x32 image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n* 'fine\\_label': an 'int' classification label with the following mapping:\n\n\n'0': apple\n\n\n'1': aquarium\\_fish\n\n\n'2': baby\n\n\n'3': bear\n\n\n'4': beaver\n\n\n'5': bed\n\n\n'6': bee\n\n\n'7': beetle\n\n\n'8': bicycle\n\n\n'9': bottle\n\n\n'10': bowl\n\n\n'11': boy\n\n\n'12': bridge\n\n\n'13': bus\n\n\n'14': butterfly\n\n\n'15': camel\n\n\n'16': can\n\n\n'17': castle\n\n\n'18': caterpillar\n\n\n'19': cattle\n\n\n'20': chair\n\n\n'21': chimpanzee\n\n\n'22': clock\n\n\n'23': cloud\n\n\n'24': cockroach\n\n\n'25': couch\n\n\n'26': cra\n\n\n'27': crocodile\n\n\n'28': cup\n\n\n'29': dinosaur\n\n\n'30': dolphin\n\n\n'31': elephant\n\n\n'32': flatfish\n\n\n'33': forest\n\n\n'34': fox\n\n\n'35': girl\n\n\n'36': hamster\n\n\n'37': house\n\n\n'38': kangaroo\n\n\n'39': keyboard\n\n\n'40': lamp\n\n\n'41': lawn\\_mower\n\n\n'42': leopard\n\n\n'43': lion\n\n\n'44': lizard\n\n\n'45': lobster\n\n\n'46': man\n\n\n'47': maple\\_tree\n\n\n'48': motorcycle\n\n\n'49': mountain\n\n\n'50': mouse\n\n\n'51': mushroom\n\n\n'52': oak\\_tree\n\n\n'53': orange\n\n\n'54': orchid\n\n\n'55': otter\n\n\n'56': palm\\_tree\n\n\n'57': pear\n\n\n'58': pickup\\_truck\n\n\n'59': pine\\_tree\n\n\n'60': plain\n\n\n'61': plate\n\n\n'62': poppy\n\n\n'63': porcupine\n\n\n'64': possum\n\n\n'65': rabbit\n\n\n'66': raccoon\n\n\n'67': ray\n\n\n'68': road\n\n\n'69': rocket\n\n\n'70': rose\n\n\n'71': sea\n\n\n'72': seal\n\n\n'73': shark\n\n\n'74': shrew\n\n\n'75': skunk\n\n\n'76': skyscraper\n\n\n'77': snail\n\n\n'78': snake\n\n\n'79': spider\n\n\n'80': squirrel\n\n\n'81': streetcar\n\n\n'82': sunflower\n\n\n'83': sweet\\_pepper\n\n\n'84': table\n\n\n'85': tank\n\n\n'86': telephone\n\n\n'87': television\n\n\n'88': tiger\n\n\n'89': tractor\n\n\n'90': train\n\n\n'91': trout\n\n\n'92': tulip\n\n\n'93': turtle\n\n\n'94': wardrobe\n\n\n'95': whale\n\n\n'96': willow\\_tree\n\n\n'97': wolf\n\n\n'98': woman\n\n\n'99': worm\n* 'coarse\\_label': an 'int' coarse classification label with following mapping:\n\n\n'0': aquatic\\_mammals\n\n\n'1': fish\n\n\n'2': flowers\n\n\n'3': food\\_containers\n\n\n'4': fruit\\_and\\_vegetables\n\n\n'5': household\\_electrical\\_devices\n\n\n'6': household\\_furniture\n\n\n'7': insects\n\n\n'8': large\\_carnivores\n\n\n'9': large\\_man-made\\_outdoor\\_things\n\n\n'10': large\\_natural\\_outdoor\\_scenes\n\n\n'11': 
large\\_omnivores\\_and\\_herbivores\n\n\n'12': medium\\_mammals\n\n\n'13': non-insect\\_invertebrates\n\n\n'14': people\n\n\n'15': reptiles\n\n\n'16': small\\_mammals\n\n\n'17': trees\n\n\n'18': vehicles\\_1\n\n\n'19': vehicles\\_2", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @gchhablani for adding this dataset." ]
[ "TAGS\n#task_categories-image-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-80-Million-Tiny-Images #language-English #license-unknown #region-us \n", "### Dataset Summary\n\n\nThe CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images\nper class. There are 500 training images and 100 testing images per class. There are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses.\nThere are two labels per image - fine label (actual class) and coarse label (superclass).", "### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available here.", "### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from the training set is provided below:", "### Data Fields\n\n\n* 'img': A 'PIL.Image.Image' object containing the 32x32 image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n* 'fine\\_label': an 'int' classification label with the following mapping:\n\n\n'0': apple\n\n\n'1': aquarium\\_fish\n\n\n'2': baby\n\n\n'3': bear\n\n\n'4': beaver\n\n\n'5': bed\n\n\n'6': bee\n\n\n'7': beetle\n\n\n'8': bicycle\n\n\n'9': bottle\n\n\n'10': bowl\n\n\n'11': boy\n\n\n'12': bridge\n\n\n'13': bus\n\n\n'14': butterfly\n\n\n'15': camel\n\n\n'16': can\n\n\n'17': castle\n\n\n'18': caterpillar\n\n\n'19': cattle\n\n\n'20': chair\n\n\n'21': chimpanzee\n\n\n'22': clock\n\n\n'23': cloud\n\n\n'24': cockroach\n\n\n'25': couch\n\n\n'26': cra\n\n\n'27': crocodile\n\n\n'28': cup\n\n\n'29': dinosaur\n\n\n'30': dolphin\n\n\n'31': elephant\n\n\n'32': flatfish\n\n\n'33': forest\n\n\n'34': fox\n\n\n'35': girl\n\n\n'36': hamster\n\n\n'37': house\n\n\n'38': kangaroo\n\n\n'39': keyboard\n\n\n'40': lamp\n\n\n'41': lawn\\_mower\n\n\n'42': leopard\n\n\n'43': lion\n\n\n'44': lizard\n\n\n'45': lobster\n\n\n'46': man\n\n\n'47': maple\\_tree\n\n\n'48': motorcycle\n\n\n'49': mountain\n\n\n'50': mouse\n\n\n'51': mushroom\n\n\n'52': oak\\_tree\n\n\n'53': orange\n\n\n'54': orchid\n\n\n'55': otter\n\n\n'56': palm\\_tree\n\n\n'57': pear\n\n\n'58': pickup\\_truck\n\n\n'59': pine\\_tree\n\n\n'60': plain\n\n\n'61': plate\n\n\n'62': poppy\n\n\n'63': porcupine\n\n\n'64': possum\n\n\n'65': rabbit\n\n\n'66': raccoon\n\n\n'67': ray\n\n\n'68': road\n\n\n'69': rocket\n\n\n'70': rose\n\n\n'71': sea\n\n\n'72': seal\n\n\n'73': shark\n\n\n'74': shrew\n\n\n'75': skunk\n\n\n'76': skyscraper\n\n\n'77': snail\n\n\n'78': snake\n\n\n'79': spider\n\n\n'80': squirrel\n\n\n'81': streetcar\n\n\n'82': sunflower\n\n\n'83': sweet\\_pepper\n\n\n'84': table\n\n\n'85': tank\n\n\n'86': telephone\n\n\n'87': television\n\n\n'88': tiger\n\n\n'89': tractor\n\n\n'90': train\n\n\n'91': trout\n\n\n'92': tulip\n\n\n'93': turtle\n\n\n'94': wardrobe\n\n\n'95': whale\n\n\n'96': willow\\_tree\n\n\n'97': wolf\n\n\n'98': woman\n\n\n'99': worm\n* 'coarse\\_label': an 'int' coarse classification label with following mapping:\n\n\n'0': aquatic\\_mammals\n\n\n'1': fish\n\n\n'2': flowers\n\n\n'3': food\\_containers\n\n\n'4': 
fruit\\_and\\_vegetables\n\n\n'5': household\\_electrical\\_devices\n\n\n'6': household\\_furniture\n\n\n'7': insects\n\n\n'8': large\\_carnivores\n\n\n'9': large\\_man-made\\_outdoor\\_things\n\n\n'10': large\\_natural\\_outdoor\\_scenes\n\n\n'11': large\\_omnivores\\_and\\_herbivores\n\n\n'12': medium\\_mammals\n\n\n'13': non-insect\\_invertebrates\n\n\n'14': people\n\n\n'15': reptiles\n\n\n'16': small\\_mammals\n\n\n'17': trees\n\n\n'18': vehicles\\_1\n\n\n'19': vehicles\\_2", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @gchhablani for adding this dataset." ]
[ 91, 89, 43, 12, 16, 993, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 18 ]
[ "passage: TAGS\n#task_categories-image-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-80-Million-Tiny-Images #language-English #license-unknown #region-us \n### Dataset Summary\n\n\nThe CIFAR-100 dataset consists of 60000 32x32 colour images in 100 classes, with 600 images\nper class. There are 500 training images and 100 testing images per class. There are 50000 training images and 10000 test images. The 100 classes are grouped into 20 superclasses.\nThere are two labels per image - fine label (actual class) and coarse label (superclass).### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The goal of this task is to classify a given image into one of 100 classes. The leaderboard is available here.### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from the training set is provided below:" ]
faa1b5a78dd926a899bcd4da289c2e3abe8061a9
# Dataset Card for CIRCA

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [CIRCA homepage](https://github.com/google-research-datasets/circa)
- **Repository:** [CIRCA repository](https://github.com/google-research-datasets/circa)
- **Paper:** [“I’d rather just go to bed”: Understanding Indirect Answers](https://arxiv.org/abs/2010.03450)
- **Point of Contact:** [Circa team, Google](circa@google.com)

### Dataset Summary

The Circa (meaning ‘approximately’) dataset aims to help machine learning systems solve the problem of interpreting indirect answers to polar questions.

The dataset contains pairs of yes/no questions and indirect answers, together with annotations for the interpretation of the answer. The data was collected in 10 different social conversational situations (e.g. the food preferences of a friend).

The following are the situational contexts for the dialogs in the data.

```
1. X wants to know about Y’s food preferences.
2. X wants to know what activities Y likes to do during weekends.
3. X wants to know what sorts of books Y likes to read.
4. Y has just moved into a neighbourhood and meets his/her new neighbour X.
5. X and Y are colleagues who are leaving work on a Friday at the same time.
6. X wants to know about Y's music preferences.
7. Y has just travelled from a different city to meet X.
8. X and Y are childhood neighbours who unexpectedly run into each other at a cafe.
9. Y has just told X that he/she is thinking of buying a flat in New York.
10. Y has just told X that he/she is considering switching his/her job.
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

The columns indicate:

```
1. id : unique id for the question-answer pair
2. context : the social situation for the dialogue. One of 10 situations (see next section). Each situation is a dialogue between a person who poses the question (X) and the person who answers (Y).
3. question-X : the question posed by X
4. canquestion-X : an (automatically) rewritten version of the question in declarative form. Eg. Do you like Italian? --> I like Italian. See the paper for details.
5. answer-Y : the answer given by Y to X
6. judgements : the interpretations for the QA pair from 5 annotators. The value is a list of 5 strings, separated by the token ‘#’
7. goldstandard1 : a gold standard majority judgement from the annotators. The value is the most common interpretation and picked by at least 3 (out of 5) annotators. When a majority judgement was not reached by the above criteria, the value is ‘NA’
8. goldstandard2 : here the labels ‘Probably yes / sometimes yes’, ‘Probably no’, and ‘I am not sure how X will interpret Y’s answer’ are mapped respectively to ‘Yes’, ‘No’, and ‘In the middle, neither yes nor no’ before computing the majority. Still the label must be given at least 3 times to become the majority choice. This method represents a less strict way of analyzing the interpretations.
```

### Data Fields

```
id : 1
context : X wants to know about Y's food preferences.
question-X : Are you vegan?
canquestion-X : I am vegan.
answer-Y : I love burgers too much.
judgements : no#no#no#no#no
goldstandard1 : no (label(s) used for the classification task)
goldstandard2 : no (label(s) used for the classification task)
```

### Data Splits

There are no explicit train/val/test splits in this dataset.

## Dataset Creation

### Curation Rationale

They revisited a pragmatic inference problem in dialog: understanding indirect responses to questions. Humans can interpret ‘I’m starving.’ in response to ‘Hungry?’, even without direct cue words such as ‘yes’ and ‘no’. In dialog systems, allowing natural responses rather than closed vocabularies would be similarly beneficial. However, today’s systems are only as sensitive to these pragmatic moves as their language model allows. They created and released the first large-scale English-language corpus ‘Circa’ with 34,268 (polar question, indirect answer) pairs to enable progress on this task.

### Source Data

#### Initial Data Collection and Normalization

The QA pairs and judgements were collected using crowd annotations in three phases. They recruited English native speakers. The full descriptions of the data collection and quality control are present in the [EMNLP 2020 paper](https://arxiv.org/pdf/2010.03450.pdf). Below is a brief overview only.

Phase 1: In the first phase, they collected questions only. They designed 10 imaginary social situations which give the annotator a context for the conversation. Examples are:

```
‘asking a friend for food preferences’
‘meeting your childhood neighbour’
‘your friend wants to buy a flat in New York’
```

Annotators were asked to suggest questions which could be asked in each situation, such that each question only requires a ‘yes’ or ‘no’ answer. 100 annotators produced 5 questions each for the 10 situations, resulting in 5000 questions.

Phase 2: Here they focused on eliciting answers to the questions. They sampled 3500 questions from their previous set. For each question, they collected possible answers from 10 different annotators. The annotators were instructed to provide a natural phrase or a sentence as the answer and to avoid the use of explicit ‘yes’ and ‘no’ words.

Phase 3: Finally, the QA pairs (34,268) were given to a third set of annotators who were asked how the question seeker would likely interpret a particular answer. These annotators had the following options to choose from:

```
* 'Yes'
* 'Probably yes' / 'sometimes yes'
* 'Yes, subject to some conditions'
* 'No'
* 'Probably no'
* 'In the middle, neither yes nor no'
* 'I am not sure how X will interpret Y's answer'
```

#### Who are the source language producers?

The rest of the data, apart from the 10 initial questions, was collected using crowd workers. They ran pilots for each step of data collection and perused the results manually to ensure clarity in the guidelines and quality of the data.

They also recruited native English speakers, mostly from the USA and a few from the UK and Canada. They did not collect any further information about the crowd workers.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

The rest of the data, apart from the 10 initial questions, was collected using crowd workers. They ran pilots for each step of data collection and perused the results manually to ensure clarity in the guidelines and quality of the data.

They also recruited native English speakers, mostly from the USA and a few from the UK and Canada. They did not collect any further information about the crowd workers.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

This dataset is the work of Annie Louis, Dan Roth, and Filip Radlinski from Google LLC.

### Licensing Information

This dataset was made available under the Creative Commons Attribution 4.0 License. A full copy of the license can be found at https://creativecommons.org/licenses/by-sa/4.0/.

### Citation Information

```
@InProceedings{louis_emnlp2020,
  author    = "Annie Louis and Dan Roth and Filip Radlinski",
  title     = "\"{I}'d rather just go to bed\": {U}nderstanding {I}ndirect {A}nswers",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
  year      = "2020",
}
```

### Contributions

Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
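To make the field layout concrete, here is a minimal sketch that loads the single train split and unpacks the ‘#’-separated judgements string. It assumes only the Hugging Face `datasets` library; the guard for pairs without a majority judgement is an assumption, since the card does not document how ‘NA’ is encoded:

```python
from datasets import load_dataset

# Circa ships a single "train" split (34,268 question/answer pairs).
ds = load_dataset("circa", split="train")

example = ds[0]
# The five annotator interpretations are packed into one '#'-separated string.
judgements = example["judgements"].split("#")

# goldstandard1 is a ClassLabel int; map it back to its name. Pairs with no
# majority judgement ('NA') may fall outside the named classes (assumption),
# so guard before converting.
gold1 = example["goldstandard1"]
label = ds.features["goldstandard1"].int2str(gold1) if gold1 >= 0 else "NA"
print(example["question-X"], "|", example["answer-Y"], "|", judgements, "|", label)
```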
circa
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-4.0", "question-answer-pair-classification", "arxiv:2010.03450", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "paperswithcode_id": "circa", "pretty_name": "CIRCA", "tags": ["question-answer-pair-classification"], "dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "question-X", "dtype": "string"}, {"name": "canquestion-X", "dtype": "string"}, {"name": "answer-Y", "dtype": "string"}, {"name": "judgements", "dtype": "string"}, {"name": "goldstandard1", "dtype": {"class_label": {"names": {"0": "Yes", "1": "No", "2": "In the middle, neither yes nor no", "3": "Probably yes / sometimes yes", "4": "Probably no", "5": "Yes, subject to some conditions", "6": "Other", "7": "I am not sure how X will interpret Y\u2019s answer"}}}}, {"name": "goldstandard2", "dtype": {"class_label": {"names": {"0": "Yes", "1": "No", "2": "In the middle, neither yes nor no", "3": "Yes, subject to some conditions", "4": "Other"}}}}], "splits": [{"name": "train", "num_bytes": 8149409, "num_examples": 34268}], "download_size": 2278280, "dataset_size": 8149409}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-18T14:21:12+00:00
[ "2010.03450" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #question-answer-pair-classification #arxiv-2010.03450 #region-us
# Dataset Card for CIRCA

## Table of Contents
- Dataset Description
  - Dataset Summary
  - Supported Tasks and Leaderboards
  - Languages
- Dataset Structure
  - Data Instances
  - Data Fields
  - Data Splits
- Dataset Creation
  - Curation Rationale
  - Source Data
  - Annotations
  - Personal and Sensitive Information
- Considerations for Using the Data
  - Social Impact of Dataset
  - Discussion of Biases
  - Other Known Limitations
- Additional Information
  - Dataset Curators
  - Licensing Information
  - Citation Information
  - Contributions

## Dataset Description

- Homepage: CIRCA homepage
- Repository: CIRCA repository
- Paper: “I’d rather just go to bed”: Understanding Indirect Answers
- Point of Contact: Circa team, Google

### Dataset Summary

The Circa (meaning ‘approximately’) dataset aims to help machine learning systems solve the problem of interpreting indirect answers to polar questions.

The dataset contains pairs of yes/no questions and indirect answers, together with annotations for the interpretation of the answer. The data was collected in 10 different social conversational situations (e.g., the food preferences of a friend).

The following are the situational contexts for the dialogs in the data.

### Supported Tasks and Leaderboards

### Languages

The text in the dataset is in English.

## Dataset Structure

### Data Instances

The columns indicate:

### Data Fields

### Data Splits

There are no explicit train/val/test splits in this dataset.

## Dataset Creation

### Curation Rationale

The authors revisited a pragmatic inference problem in dialog: understanding indirect responses to questions. Humans can interpret ‘I’m starving.’ in response to ‘Hungry?’, even without direct cue words such as ‘yes’ and ‘no’. In dialog systems, allowing natural responses rather than closed vocabularies would be similarly beneficial. However, today’s systems are only as sensitive to these pragmatic moves as their language model allows. The authors therefore created and released the first large-scale English-language corpus, ‘Circa’, with 34,268 (polar question, indirect answer) pairs to enable progress on this task.

### Source Data

#### Initial Data Collection and Normalization

The QA pairs and judgements were collected using crowd annotations in three phases. The authors recruited native English speakers. The full description of the data collection and quality control is given in the EMNLP 2020 paper; below is a brief overview only.

Phase 1: In the first phase, they collected questions only. They designed 10 imaginary social situations which give the annotator a context for the conversation. Examples are:

Annotators were asked to suggest questions which could be asked in each situation, such that each question only requires a ‘yes’ or ‘no’ answer. 100 annotators produced 5 questions each for the 10 situations, resulting in 5,000 questions.

Phase 2: Here they focused on eliciting answers to the questions. They sampled 3,500 questions from the previous set. For each question, they collected possible answers from 10 different annotators. The annotators were instructed to provide a natural phrase or a sentence as the answer and to avoid the use of explicit ‘yes’ and ‘no’ words.

Phase 3: Finally, the QA pairs (34,268) were given to a third set of annotators who were asked how the question seeker would likely interpret a particular answer. These annotators had the following options to choose from:

#### Who are the source language producers?

The rest of the data, apart from the 10 initial questions, was collected using crowd workers. The authors ran pilots for each step of data collection and reviewed the results manually to ensure the clarity of the guidelines and the quality of the data. They recruited native English speakers, mostly from the USA and a few from the UK and Canada. They did not collect any further information about the crowd workers.

### Annotations

#### Annotation process

#### Who are the annotators?

The rest of the data, apart from the 10 initial questions, was collected using crowd workers. The authors ran pilots for each step of data collection and reviewed the results manually to ensure the clarity of the guidelines and the quality of the data. They recruited native English speakers, mostly from the USA and a few from the UK and Canada. They did not collect any further information about the crowd workers.

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

This dataset is the work of Annie Louis, Dan Roth, and Filip Radlinski from Google LLC.

### Licensing Information

This dataset was made available under the Creative Commons Attribution 4.0 License. A full copy of the license can be found at URL.

### Contributions

Thanks to @bhavitvyamalik for adding this dataset.
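The QA-pair schema is easiest to see in code. Below is a minimal loading sketch, assuming the Hugging Face `datasets` library and that the corpus is published on the Hub under the id `circa`; the field names (`context`, `question-X`, `answer-Y`, `goldstandard2`) follow the public Hub copy and should be treated as assumptions if your copy differs.

```python
# Minimal sketch: load Circa and inspect one (polar question, indirect answer) pair.
# Assumes the Hub id "circa" and the field names of the public Hub copy.
from datasets import load_dataset

circa = load_dataset("circa", split="train")

ex = circa[0]
print(ex["context"])     # one of the 10 situational contexts
print(ex["question-X"])  # the polar (yes/no) question
print(ex["answer-Y"])    # the indirect answer

# goldstandard2 is the aggregated interpretation label (a ClassLabel),
# so the stored integer can be mapped back to a readable name.
names = circa.features["goldstandard2"].names
print(names[ex["goldstandard2"]])
```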
[ "# Dataset Card for CIRCA", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: CIRCA homepage\n- Repository: CIRCA repository\n- Paper: \"I’d rather just go to bed”: Understanding Indirect Answers\n- Point of Contact: Circa team, Google", "### Dataset Summary\n\nThe Circa (meaning ‘approximately’) dataset aims to help machine learning systems to solve the problem of interpreting indirect answers to polar questions.\n\nThe dataset contains pairs of yes/no questions and indirect answers, together with annotations for the interpretation of the answer. The data is collected in 10 different social conversational situations (eg. food preferences of a friend).\n\nThe following are the situational contexts for the dialogs in the data.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe text in the dataset is in English.", "## Dataset Structure", "### Data Instances\n\nThe columns indicate:", "### Data Fields", "### Data Splits\n\nThere are no explicit train/val/test splits in this dataset.", "## Dataset Creation", "### Curation Rationale\n\nThey revisited a pragmatic inference problem in dialog: Understanding indirect responses to questions. Humans can interpret ‘I’m starving.’ in response to ‘Hungry?’, even without direct cue words such as ‘yes’ and ‘no’. In dialog systems, allowing natural responses rather than closed vocabularies would be similarly beneficial. However, today’s systems are only as sensitive to these pragmatic moves as their language model allows. They create and release the first large-scale English language corpus ‘Circa’ with 34,268 (polar question, indirect answer) pairs to enable progress on this task.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe QA pairs and judgements were collected using crowd annotations in three phases. They recruited English native speakers. The full descriptions of the data collection and quality control are present in EMNLP 2020 paper. Below is a brief overview only.\n\nPhase 1: In the first phase, they collected questions only. They designed 10 imaginary social situations which give the annotator a context for the conversation. Examples are:\n\nAnnotators were asked to suggest questions which could be asked in each situation, such that each question only requires a ‘yes’ or ‘no’ answer. 100 annotators produced 5 questions each for the 10 situations, resulting in 5000 questions.\n\nPhase 2: Here they focused on eliciting answers to the questions. They sampled 3500 questions from our previous set. For each question, They collected possible answers from 10 different annotators. The annotators were instructed to provide a natural phrase or a sentence as the answer and to avoid the use of explicit ‘yes’ and ‘no’ words.\n\nPhase 3: Finally the QA pairs (34,268) were given to a third set of annotators who were asked how the question seeker would likely interpret a particular answer. 
These annotators had the following options to choose from:", "#### Who are the source language producers?\n\nThe rest of the data apart from 10 initial questions was collected using crowd workers. They ran pilots for each step of data collection, and perused their results manually to ensure clarity in guidelines, and quality of the data. They also recruited native English speakers, mostly from the USA, and a few from the UK and Canada. They did not collect any further information about the crowd workers.", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe rest of the data apart from 10 initial questions was collected using crowd workers. They ran pilots for each step of data collection, and perused their results manually to ensure clarity in guidelines, and quality of the data. They also recruited native English speakers, mostly from the USA, and a few from the UK and Canada. They did not collect any further information about the crowd workers.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset is the work of Annie Louis, Dan Roth, and Filip Radlinski from Google LLC.", "### Licensing Information\n\nThis dataset was made available under the Creative Commons Attribution 4.0 License. A full copy of the license can be found at URL and link to the license webpage if available.", "### Contributions\n\nThanks to @bhavitvyamalik for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #question-answer-pair-classification #arxiv-2010.03450 #region-us \n", "# Dataset Card for CIRCA", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: CIRCA homepage\n- Repository: CIRCA repository\n- Paper: \"I’d rather just go to bed”: Understanding Indirect Answers\n- Point of Contact: Circa team, Google", "### Dataset Summary\n\nThe Circa (meaning ‘approximately’) dataset aims to help machine learning systems to solve the problem of interpreting indirect answers to polar questions.\n\nThe dataset contains pairs of yes/no questions and indirect answers, together with annotations for the interpretation of the answer. The data is collected in 10 different social conversational situations (eg. food preferences of a friend).\n\nThe following are the situational contexts for the dialogs in the data.", "### Supported Tasks and Leaderboards", "### Languages\n\nThe text in the dataset is in English.", "## Dataset Structure", "### Data Instances\n\nThe columns indicate:", "### Data Fields", "### Data Splits\n\nThere are no explicit train/val/test splits in this dataset.", "## Dataset Creation", "### Curation Rationale\n\nThey revisited a pragmatic inference problem in dialog: Understanding indirect responses to questions. Humans can interpret ‘I’m starving.’ in response to ‘Hungry?’, even without direct cue words such as ‘yes’ and ‘no’. In dialog systems, allowing natural responses rather than closed vocabularies would be similarly beneficial. However, today’s systems are only as sensitive to these pragmatic moves as their language model allows. They create and release the first large-scale English language corpus ‘Circa’ with 34,268 (polar question, indirect answer) pairs to enable progress on this task.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe QA pairs and judgements were collected using crowd annotations in three phases. They recruited English native speakers. The full descriptions of the data collection and quality control are present in EMNLP 2020 paper. Below is a brief overview only.\n\nPhase 1: In the first phase, they collected questions only. They designed 10 imaginary social situations which give the annotator a context for the conversation. Examples are:\n\nAnnotators were asked to suggest questions which could be asked in each situation, such that each question only requires a ‘yes’ or ‘no’ answer. 100 annotators produced 5 questions each for the 10 situations, resulting in 5000 questions.\n\nPhase 2: Here they focused on eliciting answers to the questions. They sampled 3500 questions from our previous set. For each question, They collected possible answers from 10 different annotators. 
The annotators were instructed to provide a natural phrase or a sentence as the answer and to avoid the use of explicit ‘yes’ and ‘no’ words.\n\nPhase 3: Finally the QA pairs (34,268) were given to a third set of annotators who were asked how the question seeker would likely interpret a particular answer. These annotators had the following options to choose from:", "#### Who are the source language producers?\n\nThe rest of the data apart from 10 initial questions was collected using crowd workers. They ran pilots for each step of data collection, and perused their results manually to ensure clarity in guidelines, and quality of the data. They also recruited native English speakers, mostly from the USA, and a few from the UK and Canada. They did not collect any further information about the crowd workers.", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n\nThe rest of the data apart from 10 initial questions was collected using crowd workers. They ran pilots for each step of data collection, and perused their results manually to ensure clarity in guidelines, and quality of the data. They also recruited native English speakers, mostly from the USA, and a few from the UK and Canada. They did not collect any further information about the crowd workers.", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset is the work of Annie Louis, Dan Roth, and Filip Radlinski from Google LLC.", "### Licensing Information\n\nThis dataset was made available under the Creative Commons Attribution 4.0 License. A full copy of the license can be found at URL and link to the license webpage if available.", "### Contributions\n\nThanks to @bhavitvyamalik for adding this dataset." ]
[ 113, 7, 120, 51, 109, 10, 14, 6, 12, 5, 21, 5, 149, 4, 282, 94, 5, 5, 93, 8, 8, 7, 8, 7, 5, 28, 41, 19 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-4.0 #question-answer-pair-classification #arxiv-2010.03450 #region-us \n# Dataset Card for CIRCA## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: CIRCA homepage\n- Repository: CIRCA repository\n- Paper: \"I’d rather just go to bed”: Understanding Indirect Answers\n- Point of Contact: Circa team, Google### Dataset Summary\n\nThe Circa (meaning ‘approximately’) dataset aims to help machine learning systems to solve the problem of interpreting indirect answers to polar questions.\n\nThe dataset contains pairs of yes/no questions and indirect answers, together with annotations for the interpretation of the answer. The data is collected in 10 different social conversational situations (eg. food preferences of a friend).\n\nThe following are the situational contexts for the dialogs in the data.### Supported Tasks and Leaderboards### Languages\n\nThe text in the dataset is in English.## Dataset Structure### Data Instances\n\nThe columns indicate:### Data Fields### Data Splits\n\nThere are no explicit train/val/test splits in this dataset.## Dataset Creation", "passage: ### Curation Rationale\n\nThey revisited a pragmatic inference problem in dialog: Understanding indirect responses to questions. Humans can interpret ‘I’m starving.’ in response to ‘Hungry?’, even without direct cue words such as ‘yes’ and ‘no’. In dialog systems, allowing natural responses rather than closed vocabularies would be similarly beneficial. However, today’s systems are only as sensitive to these pragmatic moves as their language model allows. They create and release the first large-scale English language corpus ‘Circa’ with 34,268 (polar question, indirect answer) pairs to enable progress on this task.### Source Data#### Initial Data Collection and Normalization\n\nThe QA pairs and judgements were collected using crowd annotations in three phases. They recruited English native speakers. The full descriptions of the data collection and quality control are present in EMNLP 2020 paper. Below is a brief overview only.\n\nPhase 1: In the first phase, they collected questions only. They designed 10 imaginary social situations which give the annotator a context for the conversation. Examples are:\n\nAnnotators were asked to suggest questions which could be asked in each situation, such that each question only requires a ‘yes’ or ‘no’ answer. 100 annotators produced 5 questions each for the 10 situations, resulting in 5000 questions.\n\nPhase 2: Here they focused on eliciting answers to the questions. They sampled 3500 questions from our previous set. For each question, They collected possible answers from 10 different annotators. 
The annotators were instructed to provide a natural phrase or a sentence as the answer and to avoid the use of explicit ‘yes’ and ‘no’ words.\n\nPhase 3: Finally the QA pairs (34,268) were given to a third set of annotators who were asked how the question seeker would likely interpret a particular answer. These annotators had the following options to choose from:#### Who are the source language producers?\n\nThe rest of the data apart from 10 initial questions was collected using crowd workers. They ran pilots for each step of data collection, and perused their results manually to ensure clarity in guidelines, and quality of the data. They also recruited native English speakers, mostly from the USA, and a few from the UK and Canada. They did not collect any further information about the crowd workers.### Annotations#### Annotation process#### Who are the annotators?\n\nThe rest of the data apart from 10 initial questions was collected using crowd workers. They ran pilots for each step of data collection, and perused their results manually to ensure clarity in guidelines, and quality of the data. They also recruited native English speakers, mostly from the USA, and a few from the UK and Canada. They did not collect any further information about the crowd workers.### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset" ]
f2970eb3a55777454c94069077cc8d9b5866312d
# Dataset Card for "civil_comments" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data) - **Repository:** https://github.com/conversationai/unintended-ml-bias-analysis - **Paper:** https://arxiv.org/abs/1903.04561 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 414.95 MB - **Size of the generated dataset:** 661.23 MB - **Total amount of disk used:** 1.08 GB ### Dataset Summary The comments in this dataset come from an archive of the Civil Comments platform, a commenting plugin for independent news sites. These public comments were created from 2015 - 2017 and appeared on approximately 50 English-language news sites across the world. When Civil Comments shut down in 2017, they chose to make the public comments available in a lasting open archive to enable future research. The original data, published on figshare, includes the public comment text, some associated metadata such as article IDs, timestamps and commenter-generated "civility" labels, but does not include user ids. Jigsaw extended this dataset by adding additional labels for toxicity and identity mentions. This data set is an exact replica of the data released for the Jigsaw Unintended Bias in Toxicity Classification Kaggle challenge. This dataset is released under CC0, as is the underlying comment text. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 414.95 MB - **Size of the generated dataset:** 661.23 MB - **Total amount of disk used:** 1.08 GB An example of 'validation' looks as follows. ``` { "identity_attack": 0.0, "insult": 0.0, "obscene": 0.0, "severe_toxicity": 0.0, "sexual_explicit": 0.0, "text": "The public test.", "threat": 0.0, "toxicity": 0.0 } ``` ### Data Fields The data fields are the same among all splits. #### default - `text`: a `string` feature. - `toxicity`: a `float32` feature. - `severe_toxicity`: a `float32` feature. - `obscene`: a `float32` feature. 
- `threat`: a `float32` feature. - `insult`: a `float32` feature. - `identity_attack`: a `float32` feature. - `sexual_explicit`: a `float32` feature. ### Data Splits | name | train |validation|test | |-------|------:|---------:|----:| |default|1804874| 97320|97320| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information This dataset is released under [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/). ### Citation Information ``` @article{DBLP:journals/corr/abs-1903-04561, author = {Daniel Borkan and Lucas Dixon and Jeffrey Sorensen and Nithum Thain and Lucy Vasserman}, title = {Nuanced Metrics for Measuring Unintended Bias with Real Data for Text Classification}, journal = {CoRR}, volume = {abs/1903.04561}, year = {2019}, url = {http://arxiv.org/abs/1903.04561}, archivePrefix = {arXiv}, eprint = {1903.04561}, timestamp = {Sun, 31 Mar 2019 19:01:24 +0200}, biburl = {https://dblp.org/rec/bib/journals/corr/abs-1903-04561}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` ### Contributions Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
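Because the toxicity attributes are float scores rather than hard classes, a common first preprocessing step is to threshold them into binary labels. Below is a minimal sketch, assuming the Hugging Face `datasets` library, the Hub id `google/civil_comments` listed below, and the 0.5 cutoff conventionally used in the Kaggle challenge.

```python
# Sketch: derive a binary toxicity label from the float "toxicity" score.
# The 0.5 threshold follows the Jigsaw Kaggle challenge convention; adjust as needed.
from datasets import load_dataset

civil = load_dataset("google/civil_comments", split="validation")

def binarize(example):
    # "toxicity" is a float in [0, 1]; treat scores >= 0.5 as toxic
    example["label"] = int(example["toxicity"] >= 0.5)
    return example

civil = civil.map(binarize)
print(civil[0]["text"], "->", civil[0]["label"])
```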
google/civil_comments
[ "task_categories:text-classification", "task_ids:multi-label-classification", "language:en", "license:cc0-1.0", "toxic-comment-classification", "arxiv:1903.04561", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": "cc0-1.0", "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "paperswithcode_id": "civil-comments", "pretty_name": "Civil Comments", "tags": ["toxic-comment-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "toxicity", "dtype": "float32"}, {"name": "severe_toxicity", "dtype": "float32"}, {"name": "obscene", "dtype": "float32"}, {"name": "threat", "dtype": "float32"}, {"name": "insult", "dtype": "float32"}, {"name": "identity_attack", "dtype": "float32"}, {"name": "sexual_explicit", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 594805164, "num_examples": 1804874}, {"name": "validation", "num_bytes": 32216880, "num_examples": 97320}, {"name": "test", "num_bytes": 31963524, "num_examples": 97320}], "download_size": 422061071, "dataset_size": 658985568}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-25T08:23:15+00:00
[ "1903.04561" ]
[ "en" ]
116216daae4af666df84c0c3296c92d2ff9bcb29
# Dataset Card for Clickbait/Fake News in Bulgarian

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Data Science Society / Case Fake News](https://gitlab.com/datasciencesociety/case_fake_news)
- **Repository:** [Data Science Society / Case Fake News / Data](https://gitlab.com/datasciencesociety/case_fake_news/-/tree/master/data)
- **Paper:** [This paper uses the dataset.](https://www.acl-bg.org/proceedings/2017/RANLP%202017/pdf/RANLP045.pdf)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This is a corpus of Bulgarian news over a fixed period of time, whose factuality had been questioned. The news comes from 377 different sources from various domains, including politics, interesting facts, and tips & tricks.

The dataset was prepared for the Hack the Fake News hackathon. It was provided by the [Bulgarian Association of PR Agencies](http://www.bapra.bg/) and is available in [Gitlab](https://gitlab.com/datasciencesociety/).

The corpus was automatically collected and then annotated by students of journalism.

The training dataset contains 2,815 examples, of which 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits; there are 761 testing examples. There is a 98% correlation between fake news and click-baits.

One important aspect of the training dataset is that it contains many repetitions. This should not be surprising, as it attempts to represent a natural distribution of factual vs. fake news online over a period of time. As publishers of fake news often have a group of websites that feature the same deceiving content, we should expect some repetition. In particular, the training dataset contains 434 unique articles with duplicates. These articles have three reposts each on average, with the most reposted article appearing 45 times. If we take into account the labels of the reposted articles, we can see that if an article is reposted, it is more likely to be fake news. The number of fake news articles that have a duplicate in the training dataset is 1,018, whereas the number of articles with genuine content that have a duplicate in the training set is 322.

(The dataset description is from the following [paper](https://www.acl-bg.org/proceedings/2017/RANLP%202017/pdf/RANLP045.pdf).)

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Bulgarian

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

Each entry in the dataset consists of the following elements:

* `fake_news_score` - a label indicating whether the article is fake or not
* `click_bait_score` - another label indicating whether it is a click-bait
* `content_title` - article heading
* `content_url` - URL of the original article
* `content_published_time` - date of publication
* `content` - article content

### Data Splits

The **training dataset** contains 2,815 examples, where 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits.

The **validation dataset** contains 761 testing examples.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@tsvm](https://github.com/tsvm), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
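To make the two label columns concrete, here is a minimal loading sketch, assuming the Hugging Face `datasets` library and the Hub id `clickbait_news_bg` shown below; the class names are taken from the dataset metadata.

```python
# Sketch: load the corpus and decode both class labels for one article.
from datasets import load_dataset

ds = load_dataset("clickbait_news_bg")
train = ds["train"]  # 2,815 examples; the "validation" split holds the 761 test examples

fake_names = train.features["fake_news_score"].names   # ["legitimate", "fake"]
bait_names = train.features["click_bait_score"].names  # ["normal", "clickbait"]

ex = train[0]
print(ex["content_title"])
print("fake news:", fake_names[ex["fake_news_score"]],
      "| clickbait:", bait_names[ex["click_bait_score"]])
```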
clickbait_news_bg
[ "task_categories:text-classification", "task_ids:fact-checking", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:bg", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["bg"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["fact-checking"], "pretty_name": "Clickbait/Fake News in Bulgarian", "dataset_info": {"features": [{"name": "fake_news_score", "dtype": {"class_label": {"names": {"0": "legitimate", "1": "fake"}}}}, {"name": "click_bait_score", "dtype": {"class_label": {"names": {"0": "normal", "1": "clickbait"}}}}, {"name": "content_title", "dtype": "string"}, {"name": "content_url", "dtype": "string"}, {"name": "content_published_time", "dtype": "string"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24480386, "num_examples": 2815}, {"name": "validation", "num_bytes": 6752226, "num_examples": 761}], "download_size": 11831065, "dataset_size": 31232612}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-18T14:25:02+00:00
[]
[ "bg" ]
ae61ccb9320a78109a246414139ff3a2bd677b8b
# Dataset Card for ClimateFever ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [CLIMATE-FEVER homepage](http://climatefever.ai) - **Repository:** [CLIMATE-FEVER repository](https://github.com/tdiggelm/climate-fever-dataset) - **Paper:** [CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims](https://arxiv.org/abs/2012.00614) - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Thomas Diggelmann](mailto:thomasdi@student.ethz.ch) ### Dataset Summary A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim totalling in 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages The text in the dataset is in English, as found in real-world claims about climate-change on the Internet. The associated BCP-47 code is `en`. 
## Dataset Structure ### Data Instances ``` { "claim_id": "0", "claim": "Global warming is driving polar bears toward extinction", "claim_label": 0, # "SUPPORTS" "evidences": [ { "evidence_id": "Extinction risk from global warming:170", "evidence_label": 2, # "NOT_ENOUGH_INFO" "article": "Extinction risk from global warming", "evidence": "\"Recent Research Shows Human Activity Driving Earth Towards Global Extinction Event\".", "entropy": 0.6931471805599453, "votes": [ "SUPPORTS", "NOT_ENOUGH_INFO", null, null, null ] }, { "evidence_id": "Global warming:14", "evidence_label": 0, # "SUPPORTS" "article": "Global warming", "evidence": "Environmental impacts include the extinction or relocation of many species as their ecosystems change, most immediately the environments of coral reefs, mountains, and the Arctic.", "entropy": 0.0, "votes": [ "SUPPORTS", "SUPPORTS", null, null, null ] }, { "evidence_id": "Global warming:178", "evidence_label": 2, # "NOT_ENOUGH_INFO" "article": "Global warming", "evidence": "Rising temperatures push bees to their physiological limits, and could cause the extinction of bee populations.", "entropy": 0.6931471805599453, "votes": [ "SUPPORTS", "NOT_ENOUGH_INFO", null, null, null ] }, { "evidence_id": "Habitat destruction:61", "evidence_label": 0, # "SUPPORTS" "article": "Habitat destruction", "evidence": "Rising global temperatures, caused by the greenhouse effect, contribute to habitat destruction, endangering various species, such as the polar bear.", "entropy": 0.0, "votes": [ "SUPPORTS", "SUPPORTS", null, null, null ] }, { "evidence_id": "Polar bear:1328", "evidence_label": 2, # "NOT_ENOUGH_INFO" "article": "Polar bear", "evidence": "\"Bear hunting caught in global warming debate\".", "entropy": 0.6931471805599453, "votes": [ "SUPPORTS", "NOT_ENOUGH_INFO", null, null, null ] } ] } ``` ### Data Fields - `claim_id`: a `string` feature, unique claim identifier. - `claim`: a `string` feature, claim text. - `claim_label`: a `int` feature, overall label assigned to claim (based on evidence majority vote). The label correspond to 0: "supports", 1: "refutes", 2: "not enough info" and 3: "disputed". - `evidences`: a list of evidences with fields: - `evidence_id`: a `string` feature, unique evidence identifier. - `evidence_label`: a `int` feature, micro-verdict label. The label correspond to 0: "supports", 1: "refutes" and 2: "not enough info". - `article`: a `string` feature, title of source article (Wikipedia page). - `evidence`: a `string` feature, evidence sentence. - `entropy`: a `float32` feature, entropy reflecting uncertainty of `evidence_label`. - `votes`: a `list` of `string` features, corresponding to individual votes. ### Data Splits This benchmark dataset currently consists of a single data split `test` that consists of 1,535 claims or 7,675 claim-evidence pairs. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? 
## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

[Needs More Information]

### Citation Information

```bibtex
@misc{diggelmann2020climatefever,
    title={CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims},
    author={Thomas Diggelmann and Jordan Boyd-Graber and Jannis Bulian and Massimiliano Ciaramita and Markus Leippold},
    year={2020},
    eprint={2012.00614},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@tdiggelm](https://github.com/tdiggelm) for adding this dataset.
climate_fever
[ "task_categories:text-classification", "task_categories:text-retrieval", "task_ids:text-scoring", "task_ids:fact-checking", "task_ids:fact-checking-retrieval", "task_ids:semantic-similarity-scoring", "task_ids:multi-input-text-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:extended|wikipedia", "source_datasets:original", "language:en", "license:unknown", "arxiv:2012.00614", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|wikipedia", "original"], "task_categories": ["text-classification", "text-retrieval"], "task_ids": ["text-scoring", "fact-checking", "fact-checking-retrieval", "semantic-similarity-scoring", "multi-input-text-classification"], "paperswithcode_id": "climate-fever", "pretty_name": "ClimateFever", "dataset_info": {"features": [{"name": "claim_id", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "claim_label", "dtype": {"class_label": {"names": {"0": "SUPPORTS", "1": "REFUTES", "2": "NOT_ENOUGH_INFO", "3": "DISPUTED"}}}}, {"name": "evidences", "list": [{"name": "evidence_id", "dtype": "string"}, {"name": "evidence_label", "dtype": {"class_label": {"names": {"0": "SUPPORTS", "1": "REFUTES", "2": "NOT_ENOUGH_INFO"}}}}, {"name": "article", "dtype": "string"}, {"name": "evidence", "dtype": "string"}, {"name": "entropy", "dtype": "float32"}, {"name": "votes", "list": "string"}]}], "splits": [{"name": "test", "num_bytes": 2429240, "num_examples": 1535}], "download_size": 868947, "dataset_size": 2429240}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
2024-01-18T14:28:07+00:00
[ "2012.00614" ]
[ "en" ]
155b9c710419136e17307b80d0a13e68cd46b4ec
# Dataset Card for CLINC150

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Github](https://github.com/clinc/oos-eval/)
- **Repository:** [Github](https://github.com/clinc/oos-eval/)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/D19-1131)
- **Leaderboard:** [PapersWithCode](https://paperswithcode.com/sota/text-classification-on-clinc-oos)
- **Point of Contact:**

### Dataset Summary

Task-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope (OOS), i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class. Our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production task-oriented agent must handle. It offers a way of more rigorously and realistically benchmarking text classification in task-driven dialog systems.

### Supported Tasks and Leaderboards

- `intent-classification`: This dataset is for evaluating the performance of intent classification systems in the presence of "out-of-scope" queries, i.e., queries that do not fall into any of the system-supported intent classes. The dataset includes both in-scope and out-of-scope data. A leaderboard is available [here](https://paperswithcode.com/sota/text-classification-on-clinc-oos).

### Languages

English

## Dataset Structure

### Data Instances

A sample from the training set is provided below:

```
{
  'text': 'can you walk me through setting up direct deposits to my bank of internet savings account',
  'intent': 108
}
```

### Data Fields

- `text`: textual data (the query).
- `intent`: one of 150 intent classes over 10 domains; the dataset also contains one additional label, `oos`, for out-of-scope queries. (The hosted configurations name this feature `intent`, as in the dataset metadata further below.)
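Because `intent` is a `ClassLabel` feature, the id-to-name mapping can also be recovered programmatically rather than read off the table that follows. A minimal sketch using the 🤗 `datasets` library (illustrative only; it assumes network access to the Hub and the feature name `intent` from the dataset metadata):

```python
from datasets import load_dataset

clinc = load_dataset("clinc_oos", "plus", split="train")

intent = clinc.features["intent"]  # ClassLabel with 151 names
print(intent.int2str(108))         # "direct_deposit", as in the sample above
print(intent.str2int("oos"))       # 42, the out-of-scope label
```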
The Label Id to Label Name map is mentioned in the table below:

| **Label Id** | **Label name** |
|---|---|
| 0 | restaurant_reviews |
| 1 | nutrition_info |
| 2 | account_blocked |
| 3 | oil_change_how |
| 4 | time |
| 5 | weather |
| 6 | redeem_rewards |
| 7 | interest_rate |
| 8 | gas_type |
| 9 | accept_reservations |
| 10 | smart_home |
| 11 | user_name |
| 12 | report_lost_card |
| 13 | repeat |
| 14 | whisper_mode |
| 15 | what_are_your_hobbies |
| 16 | order |
| 17 | jump_start |
| 18 | schedule_meeting |
| 19 | meeting_schedule |
| 20 | freeze_account |
| 21 | what_song |
| 22 | meaning_of_life |
| 23 | restaurant_reservation |
| 24 | traffic |
| 25 | make_call |
| 26 | text |
| 27 | bill_balance |
| 28 | improve_credit_score |
| 29 | change_language |
| 30 | no |
| 31 | measurement_conversion |
| 32 | timer |
| 33 | flip_coin |
| 34 | do_you_have_pets |
| 35 | balance |
| 36 | tell_joke |
| 37 | last_maintenance |
| 38 | exchange_rate |
| 39 | uber |
| 40 | car_rental |
| 41 | credit_limit |
| 42 | oos |
| 43 | shopping_list |
| 44 | expiration_date |
| 45 | routing |
| 46 | meal_suggestion |
| 47 | tire_change |
| 48 | todo_list |
| 49 | card_declined |
| 50 | rewards_balance |
| 51 | change_accent |
| 52 | vaccines |
| 53 | reminder_update |
| 54 | food_last |
| 55 | change_ai_name |
| 56 | bill_due |
| 57 | who_do_you_work_for |
| 58 | share_location |
| 59 | international_visa |
| 60 | calendar |
| 61 | translate |
| 62 | carry_on |
| 63 | book_flight |
| 64 | insurance_change |
| 65 | todo_list_update |
| 66 | timezone |
| 67 | cancel_reservation |
| 68 | transactions |
| 69 | credit_score |
| 70 | report_fraud |
| 71 | spending_history |
| 72 | directions |
| 73 | spelling |
| 74 | insurance |
| 75 | what_is_your_name |
| 76 | reminder |
| 77 | where_are_you_from |
| 78 | distance |
| 79 | payday |
| 80 | flight_status |
| 81 | find_phone |
| 82 | greeting |
| 83 | alarm |
| 84 | order_status |
| 85 | confirm_reservation |
| 86 | cook_time |
| 87 | damaged_card |
| 88 | reset_settings |
| 89 | pin_change |
| 90 | replacement_card_duration |
| 91 | new_card |
| 92 | roll_dice |
| 93 | income |
| 94 | taxes |
| 95 | date |
| 96 | who_made_you |
| 97 | pto_request |
| 98 | tire_pressure |
| 99 | how_old_are_you |
| 100 | rollover_401k |
| 101 | pto_request_status |
| 102 | how_busy |
| 103 | application_status |
| 104 | recipe |
| 105 | calendar_update |
| 106 | play_music |
| 107 | yes |
| 108 | direct_deposit |
| 109 | credit_limit_change |
| 110 | gas |
| 111 | pay_bill |
| 112 | ingredients_list |
| 113 | lost_luggage |
| 114 | goodbye |
| 115 | what_can_i_ask_you |
| 116 | book_hotel |
| 117 | are_you_a_bot |
| 118 | next_song |
| 119 | change_speed |
| 120 | plug_type |
| 121 | maybe |
| 122 | w2 |
| 123 | oil_change_when |
| 124 | thank_you |
| 125 | shopping_list_update |
| 126 | pto_balance |
| 127 | order_checks |
| 128 | travel_alert |
| 129 | fun_fact |
| 130 | sync_device |
| 131 | schedule_maintenance |
| 132 | apr |
| 133 | transfer |
| 134 | ingredient_substitution |
| 135 | calories |
| 136 | current_location |
| 137 | international_fees |
| 138 | calculator |
| 139 | definition |
| 140 | next_holiday |
| 141 | update_playlist |
| 142 | mpg |
| 143 | min_payment |
| 144 | change_user_name |
| 145 | restaurant_suggestion |
| 146 | travel_notification |
| 147 | cancel |
| 148 | pto_used |
| 149 | travel_suggestion |
| 150 | change_volume |
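The split sizes described in the next section can be checked directly. A quick sketch (an illustration, assuming the configurations download successfully from the Hub):

```python
from datasets import load_dataset

# The three configurations share validation/test splits and differ only
# in their training data (see the table below).
for config in ("small", "imbalanced", "plus"):
    ds = load_dataset("clinc_oos", config)
    print(config, {split: ds[split].num_rows for split in ds})
```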
### Data Splits

The dataset comes in different subsets:

- `small`: Small, in which there are only 50 training queries per in-scope intent
- `imbalanced`: Imbalanced, in which intents have either 25, 50, 75, or 100 training queries
- `plus`: OOS+, in which there are 250 out-of-scope training examples, rather than 100

| name | train | validation | test |
|------|------:|-----------:|-----:|
| small | 7600 | 3100 | 5500 |
| imbalanced | 10625 | 3100 | 5500 |
| plus | 15250 | 3100 | 5500 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```
@inproceedings{larson-etal-2019-evaluation,
    title = "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction",
    author = "Larson, Stefan and Mahendran, Anish and Peper, Joseph J. and Clarke, Christopher and Lee, Andrew and Hill, Parker and Kummerfeld, Jonathan K. and Leach, Kevin and Laurenzano, Michael A. and Tang, Lingjia and Mars, Jason",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    year = "2019",
    url = "https://www.aclweb.org/anthology/D19-1131"
}
```

### Contributions

Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset.
clinc_oos
[ "task_categories:text-classification", "task_ids:intent-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification"], "paperswithcode_id": "clinc150", "pretty_name": "CLINC150", "dataset_info": [{"config_name": "imbalanced", "features": [{"name": "text", "dtype": "string"}, {"name": "intent", "dtype": {"class_label": {"names": {"0": "restaurant_reviews", "1": "nutrition_info", "2": "account_blocked", "3": "oil_change_how", "4": "time", "5": "weather", "6": "redeem_rewards", "7": "interest_rate", "8": "gas_type", "9": "accept_reservations", "10": "smart_home", "11": "user_name", "12": "report_lost_card", "13": "repeat", "14": "whisper_mode", "15": "what_are_your_hobbies", "16": "order", "17": "jump_start", "18": "schedule_meeting", "19": "meeting_schedule", "20": "freeze_account", "21": "what_song", "22": "meaning_of_life", "23": "restaurant_reservation", "24": "traffic", "25": "make_call", "26": "text", "27": "bill_balance", "28": "improve_credit_score", "29": "change_language", "30": "no", "31": "measurement_conversion", "32": "timer", "33": "flip_coin", "34": "do_you_have_pets", "35": "balance", "36": "tell_joke", "37": "last_maintenance", "38": "exchange_rate", "39": "uber", "40": "car_rental", "41": "credit_limit", "42": "oos", "43": "shopping_list", "44": "expiration_date", "45": "routing", "46": "meal_suggestion", "47": "tire_change", "48": "todo_list", "49": "card_declined", "50": "rewards_balance", "51": "change_accent", "52": "vaccines", "53": "reminder_update", "54": "food_last", "55": "change_ai_name", "56": "bill_due", "57": "who_do_you_work_for", "58": "share_location", "59": "international_visa", "60": "calendar", "61": "translate", "62": "carry_on", "63": "book_flight", "64": "insurance_change", "65": "todo_list_update", "66": "timezone", "67": "cancel_reservation", "68": "transactions", "69": "credit_score", "70": "report_fraud", "71": "spending_history", "72": "directions", "73": "spelling", "74": "insurance", "75": "what_is_your_name", "76": "reminder", "77": "where_are_you_from", "78": "distance", "79": "payday", "80": "flight_status", "81": "find_phone", "82": "greeting", "83": "alarm", "84": "order_status", "85": "confirm_reservation", "86": "cook_time", "87": "damaged_card", "88": "reset_settings", "89": "pin_change", "90": "replacement_card_duration", "91": "new_card", "92": "roll_dice", "93": "income", "94": "taxes", "95": "date", "96": "who_made_you", "97": "pto_request", "98": "tire_pressure", "99": "how_old_are_you", "100": "rollover_401k", "101": "pto_request_status", "102": "how_busy", "103": "application_status", "104": "recipe", "105": "calendar_update", "106": "play_music", "107": "yes", "108": "direct_deposit", "109": "credit_limit_change", "110": "gas", "111": "pay_bill", "112": "ingredients_list", "113": "lost_luggage", "114": "goodbye", "115": "what_can_i_ask_you", "116": "book_hotel", "117": "are_you_a_bot", "118": "next_song", "119": "change_speed", "120": "plug_type", "121": "maybe", "122": "w2", "123": "oil_change_when", "124": "thank_you", "125": "shopping_list_update", "126": "pto_balance", "127": "order_checks", "128": "travel_alert", "129": "fun_fact", "130": "sync_device", "131": "schedule_maintenance", "132": "apr", "133": "transfer", "134": "ingredient_substitution", "135": "calories", "136": "current_location", "137": "international_fees", 
"138": "calculator", "139": "definition", "140": "next_holiday", "141": "update_playlist", "142": "mpg", "143": "min_payment", "144": "change_user_name", "145": "restaurant_suggestion", "146": "travel_notification", "147": "cancel", "148": "pto_used", "149": "travel_suggestion", "150": "change_volume"}}}}], "splits": [{"name": "train", "num_bytes": 546901, "num_examples": 10625}, {"name": "validation", "num_bytes": 160298, "num_examples": 3100}, {"name": "test", "num_bytes": 286966, "num_examples": 5500}], "download_size": 441918, "dataset_size": 994165}, {"config_name": "plus", "features": [{"name": "text", "dtype": "string"}, {"name": "intent", "dtype": {"class_label": {"names": {"0": "restaurant_reviews", "1": "nutrition_info", "2": "account_blocked", "3": "oil_change_how", "4": "time", "5": "weather", "6": "redeem_rewards", "7": "interest_rate", "8": "gas_type", "9": "accept_reservations", "10": "smart_home", "11": "user_name", "12": "report_lost_card", "13": "repeat", "14": "whisper_mode", "15": "what_are_your_hobbies", "16": "order", "17": "jump_start", "18": "schedule_meeting", "19": "meeting_schedule", "20": "freeze_account", "21": "what_song", "22": "meaning_of_life", "23": "restaurant_reservation", "24": "traffic", "25": "make_call", "26": "text", "27": "bill_balance", "28": "improve_credit_score", "29": "change_language", "30": "no", "31": "measurement_conversion", "32": "timer", "33": "flip_coin", "34": "do_you_have_pets", "35": "balance", "36": "tell_joke", "37": "last_maintenance", "38": "exchange_rate", "39": "uber", "40": "car_rental", "41": "credit_limit", "42": "oos", "43": "shopping_list", "44": "expiration_date", "45": "routing", "46": "meal_suggestion", "47": "tire_change", "48": "todo_list", "49": "card_declined", "50": "rewards_balance", "51": "change_accent", "52": "vaccines", "53": "reminder_update", "54": "food_last", "55": "change_ai_name", "56": "bill_due", "57": "who_do_you_work_for", "58": "share_location", "59": "international_visa", "60": "calendar", "61": "translate", "62": "carry_on", "63": "book_flight", "64": "insurance_change", "65": "todo_list_update", "66": "timezone", "67": "cancel_reservation", "68": "transactions", "69": "credit_score", "70": "report_fraud", "71": "spending_history", "72": "directions", "73": "spelling", "74": "insurance", "75": "what_is_your_name", "76": "reminder", "77": "where_are_you_from", "78": "distance", "79": "payday", "80": "flight_status", "81": "find_phone", "82": "greeting", "83": "alarm", "84": "order_status", "85": "confirm_reservation", "86": "cook_time", "87": "damaged_card", "88": "reset_settings", "89": "pin_change", "90": "replacement_card_duration", "91": "new_card", "92": "roll_dice", "93": "income", "94": "taxes", "95": "date", "96": "who_made_you", "97": "pto_request", "98": "tire_pressure", "99": "how_old_are_you", "100": "rollover_401k", "101": "pto_request_status", "102": "how_busy", "103": "application_status", "104": "recipe", "105": "calendar_update", "106": "play_music", "107": "yes", "108": "direct_deposit", "109": "credit_limit_change", "110": "gas", "111": "pay_bill", "112": "ingredients_list", "113": "lost_luggage", "114": "goodbye", "115": "what_can_i_ask_you", "116": "book_hotel", "117": "are_you_a_bot", "118": "next_song", "119": "change_speed", "120": "plug_type", "121": "maybe", "122": "w2", "123": "oil_change_when", "124": "thank_you", "125": "shopping_list_update", "126": "pto_balance", "127": "order_checks", "128": "travel_alert", "129": "fun_fact", "130": "sync_device", "131": 
"schedule_maintenance", "132": "apr", "133": "transfer", "134": "ingredient_substitution", "135": "calories", "136": "current_location", "137": "international_fees", "138": "calculator", "139": "definition", "140": "next_holiday", "141": "update_playlist", "142": "mpg", "143": "min_payment", "144": "change_user_name", "145": "restaurant_suggestion", "146": "travel_notification", "147": "cancel", "148": "pto_used", "149": "travel_suggestion", "150": "change_volume"}}}}], "splits": [{"name": "train", "num_bytes": 791247, "num_examples": 15250}, {"name": "validation", "num_bytes": 160298, "num_examples": 3100}, {"name": "test", "num_bytes": 286966, "num_examples": 5500}], "download_size": 525729, "dataset_size": 1238511}, {"config_name": "small", "features": [{"name": "text", "dtype": "string"}, {"name": "intent", "dtype": {"class_label": {"names": {"0": "restaurant_reviews", "1": "nutrition_info", "2": "account_blocked", "3": "oil_change_how", "4": "time", "5": "weather", "6": "redeem_rewards", "7": "interest_rate", "8": "gas_type", "9": "accept_reservations", "10": "smart_home", "11": "user_name", "12": "report_lost_card", "13": "repeat", "14": "whisper_mode", "15": "what_are_your_hobbies", "16": "order", "17": "jump_start", "18": "schedule_meeting", "19": "meeting_schedule", "20": "freeze_account", "21": "what_song", "22": "meaning_of_life", "23": "restaurant_reservation", "24": "traffic", "25": "make_call", "26": "text", "27": "bill_balance", "28": "improve_credit_score", "29": "change_language", "30": "no", "31": "measurement_conversion", "32": "timer", "33": "flip_coin", "34": "do_you_have_pets", "35": "balance", "36": "tell_joke", "37": "last_maintenance", "38": "exchange_rate", "39": "uber", "40": "car_rental", "41": "credit_limit", "42": "oos", "43": "shopping_list", "44": "expiration_date", "45": "routing", "46": "meal_suggestion", "47": "tire_change", "48": "todo_list", "49": "card_declined", "50": "rewards_balance", "51": "change_accent", "52": "vaccines", "53": "reminder_update", "54": "food_last", "55": "change_ai_name", "56": "bill_due", "57": "who_do_you_work_for", "58": "share_location", "59": "international_visa", "60": "calendar", "61": "translate", "62": "carry_on", "63": "book_flight", "64": "insurance_change", "65": "todo_list_update", "66": "timezone", "67": "cancel_reservation", "68": "transactions", "69": "credit_score", "70": "report_fraud", "71": "spending_history", "72": "directions", "73": "spelling", "74": "insurance", "75": "what_is_your_name", "76": "reminder", "77": "where_are_you_from", "78": "distance", "79": "payday", "80": "flight_status", "81": "find_phone", "82": "greeting", "83": "alarm", "84": "order_status", "85": "confirm_reservation", "86": "cook_time", "87": "damaged_card", "88": "reset_settings", "89": "pin_change", "90": "replacement_card_duration", "91": "new_card", "92": "roll_dice", "93": "income", "94": "taxes", "95": "date", "96": "who_made_you", "97": "pto_request", "98": "tire_pressure", "99": "how_old_are_you", "100": "rollover_401k", "101": "pto_request_status", "102": "how_busy", "103": "application_status", "104": "recipe", "105": "calendar_update", "106": "play_music", "107": "yes", "108": "direct_deposit", "109": "credit_limit_change", "110": "gas", "111": "pay_bill", "112": "ingredients_list", "113": "lost_luggage", "114": "goodbye", "115": "what_can_i_ask_you", "116": "book_hotel", "117": "are_you_a_bot", "118": "next_song", "119": "change_speed", "120": "plug_type", "121": "maybe", "122": "w2", "123": "oil_change_when", "124": 
"thank_you", "125": "shopping_list_update", "126": "pto_balance", "127": "order_checks", "128": "travel_alert", "129": "fun_fact", "130": "sync_device", "131": "schedule_maintenance", "132": "apr", "133": "transfer", "134": "ingredient_substitution", "135": "calories", "136": "current_location", "137": "international_fees", "138": "calculator", "139": "definition", "140": "next_holiday", "141": "update_playlist", "142": "mpg", "143": "min_payment", "144": "change_user_name", "145": "restaurant_suggestion", "146": "travel_notification", "147": "cancel", "148": "pto_used", "149": "travel_suggestion", "150": "change_volume"}}}}], "splits": [{"name": "train", "num_bytes": 394124, "num_examples": 7600}, {"name": "validation", "num_bytes": 160298, "num_examples": 3100}, {"name": "test", "num_bytes": 286966, "num_examples": 5500}], "download_size": 385185, "dataset_size": 841388}], "configs": [{"config_name": "imbalanced", "data_files": [{"split": "train", "path": "imbalanced/train-*"}, {"split": "validation", "path": "imbalanced/validation-*"}, {"split": "test", "path": "imbalanced/test-*"}]}, {"config_name": "plus", "data_files": [{"split": "train", "path": "plus/train-*"}, {"split": "validation", "path": "plus/validation-*"}, {"split": "test", "path": "plus/test-*"}]}, {"config_name": "small", "data_files": [{"split": "train", "path": "small/train-*"}, {"split": "validation", "path": "small/validation-*"}, {"split": "test", "path": "small/test-*"}]}]}
2024-01-18T14:33:10+00:00
[]
[ "en" ]
28178267a609dd08bdc703dd6c931dfc2c2f4431
# Dataset Card for "clue"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.cluebenchmarks.com
- **Repository:** https://github.com/CLUEbenchmark/CLUE
- **Paper:** [CLUE: A Chinese Language Understanding Evaluation Benchmark](https://aclanthology.org/2020.coling-main.419/)
- **Paper:** https://arxiv.org/abs/2004.05986
- **Point of Contact:** [Zhenzhong Lan](mailto:lanzhenzhong@westlake.edu.cn)
- **Size of downloaded dataset files:** 198.68 MB
- **Size of the generated dataset:** 486.34 MB
- **Total amount of disk used:** 685.02 MB

### Dataset Summary

CLUE, A Chinese Language Understanding Evaluation Benchmark (https://www.cluebenchmarks.com/), is a collection of resources for training, evaluating, and analyzing Chinese language understanding systems.

### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Languages

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Dataset Structure

### Data Instances

#### afqmc

- **Size of downloaded dataset files:** 1.20 MB
- **Size of the generated dataset:** 4.20 MB
- **Total amount of disk used:** 5.40 MB

An example of 'validation' looks as follows.
```
{
  "idx": 0,
  "label": 0,
  "sentence1": "双十一花呗提额在哪",
  "sentence2": "里可以提花呗额度"
}
```

#### c3

- **Size of downloaded dataset files:** 3.20 MB
- **Size of the generated dataset:** 15.69 MB
- **Total amount of disk used:** 18.90 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
  "answer": "比人的灵敏",
  "choice": ["没有人的灵敏", "和人的差不多", "和人的一样好", "比人的灵敏"],
  "context": "[\"许多动物的某些器官感觉特别灵敏,它们能比人类提前知道一些灾害事件的发生,例如,海洋中的水母能预报风暴,老鼠能事先躲避矿井崩塌或有害气体,等等。地震往往能使一些动物的某些感觉器官受到刺激而发生异常反应。如一个地区的重力发生变异,某些动物可能通过它们的平衡...",
  "id": 1,
  "question": "动物的器官感觉与人的相比有什么不同?"
}
```
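Each CLUE task, including those shown above and below, is exposed as a separate configuration of the same dataset. A minimal loading sketch with the 🤗 `datasets` library (illustrative only; it assumes network access to the Hub):

```python
from datasets import load_dataset

# Each CLUE task is a separate configuration.
afqmc = load_dataset("clue", "afqmc", split="validation")
print(afqmc[0])

c3 = load_dataset("clue", "c3", split="train")
print(c3[0]["question"], c3[0]["choice"])
```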
#### chid

- **Size of downloaded dataset files:** 139.20 MB
- **Size of the generated dataset:** 274.08 MB
- **Total amount of disk used:** 413.28 MB

An example of 'train' looks as follows.
```
This example was too long and was cropped:

{
  "answers": {
    "candidate_id": [3, 5, 6, 1, 7, 4, 0],
    "text": ["碌碌无为", "无所作为", "苦口婆心", "得过且过", "未雨绸缪", "软硬兼施", "传宗接代"]
  },
  "candidates": "[\"传宗接代\", \"得过且过\", \"咄咄逼人\", \"碌碌无为\", \"软硬兼施\", \"无所作为\", \"苦口婆心\", \"未雨绸缪\", \"和衷共济\", \"人老珠黄\"]...",
  "content": "[\"谈到巴萨目前的成就,瓜迪奥拉用了“坚持”两个字来形容。自从上世纪90年代克鲁伊夫带队以来,巴萨就坚持每年都有拉玛西亚球员进入一队的传统。即便是范加尔时代,巴萨强力推出的“巴萨五鹰”德拉·佩纳、哈维、莫雷罗、罗杰·加西亚和贝拉乌桑几乎#idiom0000...",
  "idx": 0
}
```

#### cluewsc2020

- **Size of downloaded dataset files:** 0.28 MB
- **Size of the generated dataset:** 1.03 MB
- **Total amount of disk used:** 1.29 MB

An example of 'train' looks as follows.
```
{
  "idx": 0,
  "label": 1,
  "target": {
    "span1_index": 3,
    "span1_text": "伤口",
    "span2_index": 27,
    "span2_text": "它们"
  },
  "text": "裂开的伤口涂满尘土,里面有碎石子和木头刺,我小心翼翼把它们剔除出去。"
}
```

#### cmnli

- **Size of downloaded dataset files:** 31.40 MB
- **Size of the generated dataset:** 72.12 MB
- **Total amount of disk used:** 103.53 MB

An example of 'train' looks as follows.
```
{
  "idx": 0,
  "label": 0,
  "sentence1": "从概念上讲,奶油略读有两个基本维度-产品和地理。",
  "sentence2": "产品和地理位置是使奶油撇油起作用的原因。"
}
```

### Data Fields

The data fields are the same among all splits.

#### afqmc

- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `label`: a classification label, with possible values including `0` (0), `1` (1).
- `idx`: an `int32` feature.

#### c3

- `id`: an `int32` feature.
- `context`: a `list` of `string` features.
- `question`: a `string` feature.
- `choice`: a `list` of `string` features.
- `answer`: a `string` feature.

#### chid

- `idx`: an `int32` feature.
- `candidates`: a `list` of `string` features.
- `content`: a `list` of `string` features.
- `answers`: a dictionary feature containing:
  - `text`: a `string` feature.
  - `candidate_id`: an `int32` feature.

#### cluewsc2020

- `idx`: an `int32` feature.
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `true` (0), `false` (1).
- `span1_text`: a `string` feature.
- `span2_text`: a `string` feature.
- `span1_index`: an `int32` feature.
- `span2_index`: an `int32` feature.

#### cmnli

- `sentence1`: a `string` feature.
- `sentence2`: a `string` feature.
- `label`: a classification label, with possible values including `neutral` (0), `entailment` (1), `contradiction` (2).
- `idx`: an `int32` feature.

### Data Splits

| name | train | validation | test |
|------|------:|-----------:|-----:|
| afqmc | 34334 | 4316 | 3861 |
| c3 | 11869 | 3816 | 3892 |
| chid | 84709 | 3218 | 3231 |
| cluewsc2020 | 1244 | 304 | 290 |
| cmnli | 391783 | 12241 | 13880 |
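The nested `answers` structure in `chid` and the split sizes above can be sanity-checked programmatically. A sketch (illustrative; the `chid` invariant, that each answer text equals the candidate selected by its `candidate_id`, is inferred from the sample instance above):

```python
from datasets import load_dataset

# chid: each answer text should be the candidate picked out by candidate_id.
chid = load_dataset("clue", "chid", split="validation")
ex = chid[0]
assert all(
    ex["candidates"][cid] == text
    for cid, text in zip(ex["answers"]["candidate_id"], ex["answers"]["text"])
)

# cmnli: split sizes and label names, matching the tables above.
cmnli = load_dataset("clue", "cmnli")
print({split: cmnli[split].num_rows for split in cmnli})
print(cmnli["train"].features["label"].names)  # ['neutral', 'entailment', 'contradiction']
```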
## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Citation Information

```
@inproceedings{xu-etal-2020-clue,
    title = "{CLUE}: A {C}hinese Language Understanding Evaluation Benchmark",
    author = "Xu, Liang and Hu, Hai and Zhang, Xuanwei and Li, Lu and Cao, Chenjie and Li, Yudong and Xu, Yechen and Sun, Kai and Yu, Dian and Yu, Cong and Tian, Yin and Dong, Qianqian and Liu, Weitang and Shi, Bo and Cui, Yiming and Li, Junyi and Zeng, Jun and Wang, Rongzhao and Xie, Weijian and Li, Yanting and Patterson, Yina and Tian, Zuoyu and Zhang, Yiwen and Zhou, He and Liu, Shaoweihua and Zhao, Zhe and Zhao, Qipeng and Yue, Cong and Zhang, Xinrui and Yang, Zhengliang and Richardson, Kyle and Lan, Zhenzhong",
    booktitle = "Proceedings of the 28th International Conference on Computational Linguistics",
    month = dec,
    year = "2020",
    address = "Barcelona, Spain (Online)",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2020.coling-main.419",
    doi = "10.18653/v1/2020.coling-main.419",
    pages = "4762--4772",
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf) and [@JetRunner](https://github.com/JetRunner) for adding this dataset.
clue
[ "task_categories:text-classification", "task_categories:multiple-choice", "task_ids:topic-classification", "task_ids:semantic-similarity-scoring", "task_ids:natural-language-inference", "task_ids:multiple-choice-qa", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:zh", "license:unknown", "coreference-nli", "qa-nli", "arxiv:2004.05986", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["other"], "language_creators": ["other"], "language": ["zh"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification", "multiple-choice"], "task_ids": ["topic-classification", "semantic-similarity-scoring", "natural-language-inference", "multiple-choice-qa"], "paperswithcode_id": "clue", "pretty_name": "CLUE: Chinese Language Understanding Evaluation benchmark", "tags": ["coreference-nli", "qa-nli"], "dataset_info": [{"config_name": "afqmc", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 378718, "num_examples": 3861}, {"name": "train", "num_bytes": 3396503, "num_examples": 34334}, {"name": "validation", "num_bytes": 426285, "num_examples": 4316}], "download_size": 2337418, "dataset_size": 4201506}, {"config_name": "c3", "features": [{"name": "id", "dtype": "int32"}, {"name": "context", "sequence": "string"}, {"name": "question", "dtype": "string"}, {"name": "choice", "sequence": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1600142, "num_examples": 1625}, {"name": "train", "num_bytes": 9672739, "num_examples": 11869}, {"name": "validation", "num_bytes": 2990943, "num_examples": 3816}], "download_size": 4718960, "dataset_size": 14263824}, {"config_name": "chid", "features": [{"name": "idx", "dtype": "int32"}, {"name": "candidates", "sequence": "string"}, {"name": "content", "sequence": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "candidate_id", "dtype": "int32"}]}], "splits": [{"name": "test", "num_bytes": 11480435, "num_examples": 3447}, {"name": "train", "num_bytes": 252477926, "num_examples": 84709}, {"name": "validation", "num_bytes": 10117761, "num_examples": 3218}], "download_size": 198468807, "dataset_size": 274076122}, {"config_name": "cluewsc2020", "features": [{"name": "idx", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "true", "1": "false"}}}}, {"name": "target", "struct": [{"name": "span1_text", "dtype": "string"}, {"name": "span2_text", "dtype": "string"}, {"name": "span1_index", "dtype": "int32"}, {"name": "span2_index", "dtype": "int32"}]}], "splits": [{"name": "test", "num_bytes": 645637, "num_examples": 2574}, {"name": "train", "num_bytes": 288816, "num_examples": 1244}, {"name": "validation", "num_bytes": 72670, "num_examples": 304}], "download_size": 380611, "dataset_size": 1007123}, {"config_name": "cmnli", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neutral", "1": "entailment", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 2386821, "num_examples": 13880}, {"name": "train", "num_bytes": 67684989, "num_examples": 391783}, {"name": "validation", "num_bytes": 2051829, "num_examples": 12241}], "download_size": 54234919, "dataset_size": 72123639}, {"config_name": "cmrc2018", "features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": 
"int32"}]}], "splits": [{"name": "test", "num_bytes": 3112042, "num_examples": 2000}, {"name": "train", "num_bytes": 15508062, "num_examples": 10142}, {"name": "validation", "num_bytes": 5183785, "num_examples": 3219}, {"name": "trial", "num_bytes": 1606907, "num_examples": 1002}], "download_size": 5459001, "dataset_size": 25410796}, {"config_name": "csl", "features": [{"name": "idx", "dtype": "int32"}, {"name": "corpus_id", "dtype": "int32"}, {"name": "abst", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}, {"name": "keyword", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 2463728, "num_examples": 3000}, {"name": "train", "num_bytes": 16478890, "num_examples": 20000}, {"name": "validation", "num_bytes": 2464563, "num_examples": 3000}], "download_size": 3936111, "dataset_size": 21407181}, {"config_name": "diagnostics", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neutral", "1": "entailment", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 42392, "num_examples": 514}], "download_size": 23000, "dataset_size": 42392}, {"config_name": "drcd", "features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "test", "num_bytes": 4982378, "num_examples": 3493}, {"name": "train", "num_bytes": 37443386, "num_examples": 26936}, {"name": "validation", "num_bytes": 5222729, "num_examples": 3524}], "download_size": 11188875, "dataset_size": 47648493}, {"config_name": "iflytek", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9", "10": "10", "11": "11", "12": "12", "13": "13", "14": "14", "15": "15", "16": "16", "17": "17", "18": "18", "19": "19", "20": "20", "21": "21", "22": "22", "23": "23", "24": "24", "25": "25", "26": "26", "27": "27", "28": "28", "29": "29", "30": "30", "31": "31", "32": "32", "33": "33", "34": "34", "35": "35", "36": "36", "37": "37", "38": "38", "39": "39", "40": "40", "41": "41", "42": "42", "43": "43", "44": "44", "45": "45", "46": "46", "47": "47", "48": "48", "49": "49", "50": "50", "51": "51", "52": "52", "53": "53", "54": "54", "55": "55", "56": "56", "57": "57", "58": "58", "59": "59", "60": "60", "61": "61", "62": "62", "63": "63", "64": "64", "65": "65", "66": "66", "67": "67", "68": "68", "69": "69", "70": "70", "71": "71", "72": "72", "73": "73", "74": "74", "75": "75", "76": "76", "77": "77", "78": "78", "79": "79", "80": "80", "81": "81", "82": "82", "83": "83", "84": "84", "85": "85", "86": "86", "87": "87", "88": "88", "89": "89", "90": "90", "91": "91", "92": "92", "93": "93", "94": "94", "95": "95", "96": "96", "97": "97", "98": "98", "99": "99", "100": "100", "101": "101", "102": "102", "103": "103", "104": "104", "105": "105", "106": "106", "107": "107", "108": "108", "109": "109", "110": "110", "111": "111", "112": "112", "113": "113", "114": "114", "115": "115", "116": "116", "117": "117", "118": "118"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 2105684, "num_examples": 2600}, {"name": "train", "num_bytes": 10028605, "num_examples": 12133}, 
{"name": "validation", "num_bytes": 2157119, "num_examples": 2599}], "download_size": 9777855, "dataset_size": 14291408}, {"config_name": "ocnli", "features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "neutral", "1": "entailment", "2": "contradiction"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 376058, "num_examples": 3000}, {"name": "train", "num_bytes": 6187142, "num_examples": 50437}, {"name": "validation", "num_bytes": 366227, "num_examples": 2950}], "download_size": 3000218, "dataset_size": 6929427}, {"config_name": "tnews", "features": [{"name": "sentence", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "100", "1": "101", "2": "102", "3": "103", "4": "104", "5": "106", "6": "107", "7": "108", "8": "109", "9": "110", "10": "112", "11": "113", "12": "114", "13": "115", "14": "116"}}}}, {"name": "idx", "dtype": "int32"}], "splits": [{"name": "test", "num_bytes": 810970, "num_examples": 10000}, {"name": "train", "num_bytes": 4245677, "num_examples": 53360}, {"name": "validation", "num_bytes": 797922, "num_examples": 10000}], "download_size": 4697843, "dataset_size": 5854569}], "configs": [{"config_name": "afqmc", "data_files": [{"split": "test", "path": "afqmc/test-*"}, {"split": "train", "path": "afqmc/train-*"}, {"split": "validation", "path": "afqmc/validation-*"}]}, {"config_name": "c3", "data_files": [{"split": "test", "path": "c3/test-*"}, {"split": "train", "path": "c3/train-*"}, {"split": "validation", "path": "c3/validation-*"}]}, {"config_name": "chid", "data_files": [{"split": "test", "path": "chid/test-*"}, {"split": "train", "path": "chid/train-*"}, {"split": "validation", "path": "chid/validation-*"}]}, {"config_name": "cluewsc2020", "data_files": [{"split": "test", "path": "cluewsc2020/test-*"}, {"split": "train", "path": "cluewsc2020/train-*"}, {"split": "validation", "path": "cluewsc2020/validation-*"}]}, {"config_name": "cmnli", "data_files": [{"split": "test", "path": "cmnli/test-*"}, {"split": "train", "path": "cmnli/train-*"}, {"split": "validation", "path": "cmnli/validation-*"}]}, {"config_name": "cmrc2018", "data_files": [{"split": "test", "path": "cmrc2018/test-*"}, {"split": "train", "path": "cmrc2018/train-*"}, {"split": "validation", "path": "cmrc2018/validation-*"}, {"split": "trial", "path": "cmrc2018/trial-*"}]}, {"config_name": "csl", "data_files": [{"split": "test", "path": "csl/test-*"}, {"split": "train", "path": "csl/train-*"}, {"split": "validation", "path": "csl/validation-*"}]}, {"config_name": "diagnostics", "data_files": [{"split": "test", "path": "diagnostics/test-*"}]}, {"config_name": "drcd", "data_files": [{"split": "test", "path": "drcd/test-*"}, {"split": "train", "path": "drcd/train-*"}, {"split": "validation", "path": "drcd/validation-*"}]}, {"config_name": "iflytek", "data_files": [{"split": "test", "path": "iflytek/test-*"}, {"split": "train", "path": "iflytek/train-*"}, {"split": "validation", "path": "iflytek/validation-*"}]}, {"config_name": "ocnli", "data_files": [{"split": "test", "path": "ocnli/test-*"}, {"split": "train", "path": "ocnli/train-*"}, {"split": "validation", "path": "ocnli/validation-*"}]}, {"config_name": "tnews", "data_files": [{"split": "test", "path": "tnews/test-*"}, {"split": "train", "path": "tnews/train-*"}, {"split": "validation", "path": "tnews/validation-*"}]}]}
2024-01-17T07:48:08+00:00
[ "2004.05986" ]
[ "zh" ]
TAGS #task_categories-text-classification #task_categories-multiple-choice #task_ids-topic-classification #task_ids-semantic-similarity-scoring #task_ids-natural-language-inference #task_ids-multiple-choice-qa #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Chinese #license-unknown #coreference-nli #qa-nli #arxiv-2004.05986 #region-us
Dataset Card for "clue" ======================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: CLUE: A Chinese Language Understanding Evaluation Benchmark * Paper: URL * Point of Contact: Zhenzhong Lan * Size of downloaded dataset files: 198.68 MB * Size of the generated dataset: 486.34 MB * Total amount of disk used: 685.02 MB ### Dataset Summary CLUE, A Chinese Language Understanding Evaluation Benchmark (URL is a collection of resources for training, evaluating, and analyzing Chinese language understanding systems. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### afqmc * Size of downloaded dataset files: 1.20 MB * Size of the generated dataset: 4.20 MB * Total amount of disk used: 5.40 MB An example of 'validation' looks as follows. #### c3 * Size of downloaded dataset files: 3.20 MB * Size of the generated dataset: 15.69 MB * Total amount of disk used: 18.90 MB An example of 'train' looks as follows. #### chid * Size of downloaded dataset files: 139.20 MB * Size of the generated dataset: 274.08 MB * Total amount of disk used: 413.28 MB An example of 'train' looks as follows. #### cluewsc2020 * Size of downloaded dataset files: 0.28 MB * Size of the generated dataset: 1.03 MB * Total amount of disk used: 1.29 MB An example of 'train' looks as follows. #### cmnli * Size of downloaded dataset files: 31.40 MB * Size of the generated dataset: 72.12 MB * Total amount of disk used: 103.53 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### afqmc * 'sentence1': a 'string' feature. * 'sentence2': a 'string' feature. * 'label': a classification label, with possible values including '0' (0), '1' (1). * 'idx': a 'int32' feature. #### c3 * 'id': a 'int32' feature. * 'context': a 'list' of 'string' features. * 'question': a 'string' feature. * 'choice': a 'list' of 'string' features. * 'answer': a 'string' feature. #### chid * 'idx': a 'int32' feature. * 'candidates': a 'list' of 'string' features. * 'content': a 'list' of 'string' features. * 'answers': a dictionary feature containing: + 'text': a 'string' feature. + 'candidate\_id': a 'int32' feature. #### cluewsc2020 * 'idx': a 'int32' feature. * 'text': a 'string' feature. * 'label': a classification label, with possible values including 'true' (0), 'false' (1). * 'span1\_text': a 'string' feature. * 'span2\_text': a 'string' feature. * 'span1\_index': a 'int32' feature. * 'span2\_index': a 'int32' feature. #### cmnli * 'sentence1': a 'string' feature. * 'sentence2': a 'string' feature. * 'label': a classification label, with possible values including 'neutral' (0), 'entailment' (1), 'contradiction' (2). * 'idx': a 'int32' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @thomwolf, @JetRunner for adding this dataset.
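To make the configurations and fields above concrete, here is a minimal loading sketch (an editor's illustration, not part of the original card) using the 🤗 `datasets` library; the config, split, and field names are taken from this card:

```python
from datasets import load_dataset

# Each CLUE task is exposed as its own config; afqmc is the
# sentence-pair classification task described under Data Fields.
afqmc = load_dataset("clue", "afqmc")

example = afqmc["validation"][0]
print(example["sentence1"])
print(example["sentence2"])
print(example["label"])  # 0 or 1, per the class_label definition above
```

Any other config listed above (c3, chid, cluewsc2020, cmnli, ...) can be loaded the same way by swapping the config name.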
[ "### Dataset Summary\n\n\nCLUE, A Chinese Language Understanding Evaluation Benchmark\n(URL is a collection of resources for training,\nevaluating, and analyzing Chinese language understanding systems.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### afqmc\n\n\n* Size of downloaded dataset files: 1.20 MB\n* Size of the generated dataset: 4.20 MB\n* Total amount of disk used: 5.40 MB\n\n\nAn example of 'validation' looks as follows.", "#### c3\n\n\n* Size of downloaded dataset files: 3.20 MB\n* Size of the generated dataset: 15.69 MB\n* Total amount of disk used: 18.90 MB\n\n\nAn example of 'train' looks as follows.", "#### chid\n\n\n* Size of downloaded dataset files: 139.20 MB\n* Size of the generated dataset: 274.08 MB\n* Total amount of disk used: 413.28 MB\n\n\nAn example of 'train' looks as follows.", "#### cluewsc2020\n\n\n* Size of downloaded dataset files: 0.28 MB\n* Size of the generated dataset: 1.03 MB\n* Total amount of disk used: 1.29 MB\n\n\nAn example of 'train' looks as follows.", "#### cmnli\n\n\n* Size of downloaded dataset files: 31.40 MB\n* Size of the generated dataset: 72.12 MB\n* Total amount of disk used: 103.53 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### afqmc\n\n\n* 'sentence1': a 'string' feature.\n* 'sentence2': a 'string' feature.\n* 'label': a classification label, with possible values including '0' (0), '1' (1).\n* 'idx': a 'int32' feature.", "#### c3\n\n\n* 'id': a 'int32' feature.\n* 'context': a 'list' of 'string' features.\n* 'question': a 'string' feature.\n* 'choice': a 'list' of 'string' features.\n* 'answer': a 'string' feature.", "#### chid\n\n\n* 'idx': a 'int32' feature.\n* 'candidates': a 'list' of 'string' features.\n* 'content': a 'list' of 'string' features.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'candidate\\_id': a 'int32' feature.", "#### cluewsc2020\n\n\n* 'idx': a 'int32' feature.\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'true' (0), 'false' (1).\n* 'span1\\_text': a 'string' feature.\n* 'span2\\_text': a 'string' feature.\n* 'span1\\_index': a 'int32' feature.\n* 'span2\\_index': a 'int32' feature.", "#### cmnli\n\n\n* 'sentence1': a 'string' feature.\n* 'sentence2': a 'string' feature.\n* 'label': a classification label, with possible values including 'neutral' (0), 'entailment' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @JetRunner for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_categories-multiple-choice #task_ids-topic-classification #task_ids-semantic-similarity-scoring #task_ids-natural-language-inference #task_ids-multiple-choice-qa #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Chinese #license-unknown #coreference-nli #qa-nli #arxiv-2004.05986 #region-us \n", "### Dataset Summary\n\n\nCLUE, A Chinese Language Understanding Evaluation Benchmark\n(URL is a collection of resources for training,\nevaluating, and analyzing Chinese language understanding systems.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### afqmc\n\n\n* Size of downloaded dataset files: 1.20 MB\n* Size of the generated dataset: 4.20 MB\n* Total amount of disk used: 5.40 MB\n\n\nAn example of 'validation' looks as follows.", "#### c3\n\n\n* Size of downloaded dataset files: 3.20 MB\n* Size of the generated dataset: 15.69 MB\n* Total amount of disk used: 18.90 MB\n\n\nAn example of 'train' looks as follows.", "#### chid\n\n\n* Size of downloaded dataset files: 139.20 MB\n* Size of the generated dataset: 274.08 MB\n* Total amount of disk used: 413.28 MB\n\n\nAn example of 'train' looks as follows.", "#### cluewsc2020\n\n\n* Size of downloaded dataset files: 0.28 MB\n* Size of the generated dataset: 1.03 MB\n* Total amount of disk used: 1.29 MB\n\n\nAn example of 'train' looks as follows.", "#### cmnli\n\n\n* Size of downloaded dataset files: 31.40 MB\n* Size of the generated dataset: 72.12 MB\n* Total amount of disk used: 103.53 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### afqmc\n\n\n* 'sentence1': a 'string' feature.\n* 'sentence2': a 'string' feature.\n* 'label': a classification label, with possible values including '0' (0), '1' (1).\n* 'idx': a 'int32' feature.", "#### c3\n\n\n* 'id': a 'int32' feature.\n* 'context': a 'list' of 'string' features.\n* 'question': a 'string' feature.\n* 'choice': a 'list' of 'string' features.\n* 'answer': a 'string' feature.", "#### chid\n\n\n* 'idx': a 'int32' feature.\n* 'candidates': a 'list' of 'string' features.\n* 'content': a 'list' of 'string' features.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'candidate\\_id': a 'int32' feature.", "#### cluewsc2020\n\n\n* 'idx': a 'int32' feature.\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'true' (0), 'false' (1).\n* 'span1\\_text': a 'string' feature.\n* 'span2\\_text': a 'string' feature.\n* 'span1\\_index': a 'int32' feature.\n* 'span2\\_index': a 'int32' feature.", "#### cmnli\n\n\n* 'sentence1': a 'string' feature.\n* 'sentence2': a 'string' feature.\n* 'label': a classification label, with possible values including 'neutral' (0), 'entailment' (1), 'contradiction' (2).\n* 'idx': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional 
Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @thomwolf, @JetRunner for adding this dataset." ]
[ 158, 41, 10, 11, 6, 53, 50, 54, 53, 53, 17, 69, 72, 91, 119, 77, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 23 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-multiple-choice #task_ids-topic-classification #task_ids-semantic-similarity-scoring #task_ids-natural-language-inference #task_ids-multiple-choice-qa #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Chinese #license-unknown #coreference-nli #qa-nli #arxiv-2004.05986 #region-us \n### Dataset Summary\n\n\nCLUE, A Chinese Language Understanding Evaluation Benchmark\n(URL is a collection of resources for training,\nevaluating, and analyzing Chinese language understanding systems.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### afqmc\n\n\n* Size of downloaded dataset files: 1.20 MB\n* Size of the generated dataset: 4.20 MB\n* Total amount of disk used: 5.40 MB\n\n\nAn example of 'validation' looks as follows.#### c3\n\n\n* Size of downloaded dataset files: 3.20 MB\n* Size of the generated dataset: 15.69 MB\n* Total amount of disk used: 18.90 MB\n\n\nAn example of 'train' looks as follows.#### chid\n\n\n* Size of downloaded dataset files: 139.20 MB\n* Size of the generated dataset: 274.08 MB\n* Total amount of disk used: 413.28 MB\n\n\nAn example of 'train' looks as follows.#### cluewsc2020\n\n\n* Size of downloaded dataset files: 0.28 MB\n* Size of the generated dataset: 1.03 MB\n* Total amount of disk used: 1.29 MB\n\n\nAn example of 'train' looks as follows.#### cmnli\n\n\n* Size of downloaded dataset files: 31.40 MB\n* Size of the generated dataset: 72.12 MB\n* Total amount of disk used: 103.53 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits." ]
b3b41993f1d1723ecc4dcf5731d68a048c9bbc5f
# Dataset Card for "cmrc2018" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/ymcui/cmrc2018](https://github.com/ymcui/cmrc2018) - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 11.50 MB - **Size of the generated dataset:** 22.31 MB - **Total amount of disk used:** 33.83 MB ### Dataset Summary A Span-Extraction dataset for Chinese machine reading comprehension to add language diversities in this area. The dataset is composed by near 20,000 real questions annotated on Wikipedia paragraphs by human experts. We also annotated a challenge set which contains the questions that need comprehensive understanding and multi-sentence inference throughout the context. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 11.50 MB - **Size of the generated dataset:** 22.31 MB - **Total amount of disk used:** 33.83 MB An example of 'validation' looks as follows. ``` This example was too long and was cropped: { "answers": { "answer_start": [11, 11], "text": ["光荣和ω-force", "光荣和ω-force"] }, "context": "\"《战国无双3》()是由光荣和ω-force开发的战国无双系列的正统第三续作。本作以三大故事为主轴,分别是以武田信玄等人为主的《关东三国志》,织田信长等人为主的《战国三杰》,石田三成等人为主的《关原的年轻武者》,丰富游戏内的剧情。此部份专门介绍角色,欲知武...", "id": "DEV_0_QUERY_0", "question": "《战国无双3》是由哪两个公司合作开发的?" } ``` ### Data Fields The data fields are the same among all splits. #### default - `id`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. 
### Data Splits | name | train | validation | test | | ------- | ----: | ---------: | ---: | | default | 10142 | 3219 | 1002 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @inproceedings{cui-emnlp2019-cmrc2018, title = "A Span-Extraction Dataset for {C}hinese Machine Reading Comprehension", author = "Cui, Yiming and Liu, Ting and Che, Wanxiang and Xiao, Li and Chen, Zhipeng and Ma, Wentao and Wang, Shijin and Hu, Guoping", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", month = nov, year = "2019", address = "Hong Kong, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/D19-1600", doi = "10.18653/v1/D19-1600", pages = "5886--5891", } ``` ### Contributions Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
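To make the span-extraction layout above concrete, here is a minimal, unofficial loading sketch with the 🤗 `datasets` library; the split and field names follow the Data Fields section, and the character-offset check reflects the SQuAD-style convention this card implies rather than anything stated explicitly:

```python
from datasets import load_dataset

# Span-extraction MRC: answers are given as text plus character offsets.
cmrc = load_dataset("cmrc2018")

sample = cmrc["validation"][0]
answer = sample["answers"]["text"][0]
start = sample["answers"]["answer_start"][0]

# `answer_start` indexes into `context`, so the gold span can be
# recovered directly from the passage (assuming character offsets).
print(sample["question"])
print(sample["context"][start:start + len(answer)] == answer)
```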
cmrc2018
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:zh", "license:cc-by-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["zh"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "paperswithcode_id": "cmrc-2018", "pretty_name": "Chinese Machine Reading Comprehension 2018", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 15508110, "num_examples": 10142}, {"name": "validation", "num_bytes": 5183809, "num_examples": 3219}, {"name": "test", "num_bytes": 1606931, "num_examples": 1002}], "download_size": 11508117, "dataset_size": 22298850}}
2024-01-18T09:11:28+00:00
[]
[ "zh" ]
TAGS #task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Chinese #license-cc-by-sa-4.0 #region-us
Dataset Card for "cmrc2018" =========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: * Point of Contact: * Size of downloaded dataset files: 11.50 MB * Size of the generated dataset: 22.31 MB * Total amount of disk used: 33.83 MB ### Dataset Summary A Span-Extraction dataset for Chinese machine reading comprehension to add language diversities in this area. The dataset is composed by near 20,000 real questions annotated on Wikipedia paragraphs by human experts. We also annotated a challenge set which contains the questions that need comprehensive understanding and multi-sentence inference throughout the context. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 11.50 MB * Size of the generated dataset: 22.31 MB * Total amount of disk used: 33.83 MB An example of 'validation' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'id': a 'string' feature. * 'context': a 'string' feature. * 'question': a 'string' feature. * 'answers': a dictionary feature containing: + 'text': a 'string' feature. + 'answer\_start': a 'int32' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @patrickvonplaten, @mariamabarham, @lewtun, @thomwolf for adding this dataset.
[ "### Dataset Summary\n\n\nA Span-Extraction dataset for Chinese machine reading comprehension to add language\ndiversities in this area. The dataset is composed by near 20,000 real questions annotated\non Wikipedia paragraphs by human experts. We also annotated a challenge set which\ncontains the questions that need comprehensive understanding and multi-sentence\ninference throughout the context.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 11.50 MB\n* Size of the generated dataset: 22.31 MB\n* Total amount of disk used: 33.83 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'id': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @patrickvonplaten, @mariamabarham, @lewtun, @thomwolf for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Chinese #license-cc-by-sa-4.0 #region-us \n", "### Dataset Summary\n\n\nA Span-Extraction dataset for Chinese machine reading comprehension to add language\ndiversities in this area. The dataset is composed by near 20,000 real questions annotated\non Wikipedia paragraphs by human experts. We also annotated a challenge set which\ncontains the questions that need comprehensive understanding and multi-sentence\ninference throughout the context.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 11.50 MB\n* Size of the generated dataset: 22.31 MB\n* Total amount of disk used: 33.83 MB\n\n\nAn example of 'validation' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'id': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @patrickvonplaten, @mariamabarham, @lewtun, @thomwolf for adding this dataset." ]
[ 97, 80, 10, 11, 6, 50, 17, 79, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 34 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Chinese #license-cc-by-sa-4.0 #region-us \n### Dataset Summary\n\n\nA Span-Extraction dataset for Chinese machine reading comprehension to add language\ndiversities in this area. The dataset is composed by near 20,000 real questions annotated\non Wikipedia paragraphs by human experts. We also annotated a challenge set which\ncontains the questions that need comprehensive understanding and multi-sentence\ninference throughout the context.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 11.50 MB\n* Size of the generated dataset: 22.31 MB\n* Total amount of disk used: 33.83 MB\n\n\nAn example of 'validation' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'id': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information### Contributions\n\n\nThanks to @patrickvonplaten, @mariamabarham, @lewtun, @thomwolf for adding this dataset." ]
19796c6fb32020154cb2745d48704fa73e29b17d
# Dataset Card for CMU Document Grounded Conversations

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [CMU Hinglish DoG](http://festvox.org/cedar/data/notyet/)
- **Repository:** [CMU Document Grounded Conversations (English version)](https://github.com/festvox/datasets-CMU_DoG)
- **Paper:** [CMU Document Grounded Conversations (English version)](https://arxiv.org/pdf/1809.07358.pdf)
- **Point of Contact:**

### Dataset Summary

This is a collection of text conversations in Hinglish (code-mixing between Hindi and English) and their corresponding English versions. It can be used for translating between the two. The dataset has been provided by Prof. Alan Black's group at CMU.

### Supported Tasks and Leaderboards

- `abstractive-mt`

### Languages

## Dataset Structure

### Data Instances

A typical data point comprises a Hinglish text under the key `hi_en` and its English version under the key `en`. The `docIdx` field contains the current section index of the wiki document at the time the utterance is said; there are in total 4 sections for each document. The `uid` field holds the user id of this utterance.

An example from the CMU_Hinglish_DoG train set looks as follows:
```
{'rating': 2, 'wikiDocumentIdx': 13, 'utcTimestamp': '2018-03-16T17:48:22.037Z', 'uid': 'user2', 'date': '2018-03-16T17:47:21.964Z', 'uid2response': {'response': [1, 2, 3, 5], 'type': 'finish'}, 'uid1LogInTime': '2018-03-16T17:47:21.964Z', 'user2_id': 'USR664', 'uid1LogOutTime': '2018-03-16T18:02:29.072Z', 'whoSawDoc': ['user1', 'user2'], 'status': 1, 'docIdx': 0, 'uid1response': {'response': [1, 2, 3, 4], 'type': 'finish'}, 'translation': {'en': 'The director is Zack Snyder, 27% Rotten Tomatoes, 4.9/10.', 'hi_en': 'Zack Snyder director hai, 27% Rotten Tomatoes, 4.9/10.'}}
```

### Data Fields

- `date`: the time the file was created, as a string
- `docIdx`: the current section index of the wiki document when the utterance is said. There are in total 4 sections for each document.
- `translation`:
  - `hi_en`: The text in Hinglish
  - `en`: The text in English
- `uid`: the user id of this utterance.
- `utcTimestamp`: the server utc timestamp of this utterance, as a string
- `rating`: a number (1, 2, or 3). A larger number means the quality of the conversation is better.
- `status`: status as an integer
- `uid1LogInTime`: optional login time of user 1, as a string
- `uid1LogOutTime`: optional logout time of user 1, as a string
- `uid1response`: a JSON object containing the status and response of the user after finishing the conversation. Fields in the object include:
  - `type`: should be one of ['finish', 'abandon', 'abandonWithouAnsweringFeedbackQuestion']. 'finish' means the user successfully finishes the conversation, either by completing 12 or 15 turns or because the other user leaves the conversation first. 'abandon' means the user abandons the conversation in the middle but still enters the feedback page. 'abandonWithouAnsweringFeedbackQuestion' means the user just disconnects or closes the web page without providing feedback.
  - `response`: the answer to the post-conversation questions. The worker can choose more than one. The options presented to the user are as follows:
    - For type 'finish' — 1: The conversation is understandable. 2: The other user is actively responding to me. 3: The conversation goes smoothly.
    - For type 'abandon' — 1: The other user is too rude. 2: I don't know how to proceed with the conversation. 3: The other user is not responding to me.
    - For users given the document — 4: I have watched the movie before. 5: I have not watched the movie before.
    - For users without the document — 4: I will watch the movie after the other user's introduction. 5: I will not watch the movie after the other user's introduction.
- `uid2response`: same as `uid1response`
- `user2_id`: the generated user id of user 2
- `whoSawDoc`: should be one of ['user1'], ['user2'], ['user1', 'user2'], indicating which user(s) read the document.
- `wikiDocumentIdx`: the index of the wiki document.

### Data Splits

| name    | train | validation | test |
| ------- | ----: | ---------: | ---: |
| CMU DoG |  8060 |        942 |  960 |

## Dataset Creation

[More Information Needed]

### Curation Rationale

[More Information Needed]

### Source Data

The Hinglish dataset is derived from the original CMU DoG (Document Grounded Conversations) dataset. More information can be found in the [repo](https://github.com/festvox/datasets-CMU_DoG).

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

[More Information Needed]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to help develop better question answering systems.

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was initially created by Prof. Alan W Black's group at CMU.

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@inproceedings{
cmu_dog_emnlp18,
    title={A Dataset for Document Grounded Conversations},
    author={Zhou, Kangyan and Prabhumoye, Shrimai and Black, Alan W},
    year={2018},
    booktitle={Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing}
}
```

### Contributions

Thanks to [@Ishan-Kumar2](https://github.com/Ishan-Kumar2) for adding this dataset.
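For readers who want to try the parallel pairs described above, here is a minimal, unofficial loading sketch with the 🤗 `datasets` library; the field names come from the Data Fields section of this card:

```python
from datasets import load_dataset

# Each row holds one utterance in both languages under `translation`.
dog = load_dataset("cmu_hinglish_dog")

example = dog["train"][0]
print(example["translation"]["hi_en"])  # Hinglish utterance
print(example["translation"]["en"])     # parallel English version
print(example["docIdx"])                # section index of the wiki document (4 sections per document)
```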
cmu_hinglish_dog
[ "task_categories:translation", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:multilingual", "multilinguality:translation", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "language:hi", "license:cc-by-sa-3.0", "license:gfdl", "arxiv:1809.07358", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": ["en", "hi"], "license": ["cc-by-sa-3.0", "gfdl"], "multilinguality": ["multilingual", "translation"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "CMU Document Grounded Conversations", "dataset_info": {"features": [{"name": "date", "dtype": "string"}, {"name": "docIdx", "dtype": "int64"}, {"name": "translation", "dtype": {"translation": {"languages": ["en", "hi_en"]}}}, {"name": "uid", "dtype": "string"}, {"name": "utcTimestamp", "dtype": "string"}, {"name": "rating", "dtype": "int64"}, {"name": "status", "dtype": "int64"}, {"name": "uid1LogInTime", "dtype": "string"}, {"name": "uid1LogOutTime", "dtype": "string"}, {"name": "uid1response", "struct": [{"name": "response", "sequence": "int64"}, {"name": "type", "dtype": "string"}]}, {"name": "uid2response", "struct": [{"name": "response", "sequence": "int64"}, {"name": "type", "dtype": "string"}]}, {"name": "user2_id", "dtype": "string"}, {"name": "whoSawDoc", "sequence": "string"}, {"name": "wikiDocumentIdx", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3140818, "num_examples": 8060}, {"name": "test", "num_bytes": 379465, "num_examples": 960}, {"name": "validation", "num_bytes": 368670, "num_examples": 942}], "download_size": 1039828, "dataset_size": 3888953}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2024-01-18T14:36:48+00:00
[ "1809.07358" ]
[ "en", "hi" ]
TAGS #task_categories-translation #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-multilingual #multilinguality-translation #size_categories-1K<n<10K #source_datasets-original #language-English #language-Hindi #license-cc-by-sa-3.0 #license-gfdl #arxiv-1809.07358 #region-us
96df5e686bee6baa90b8bee7c28b81fa3fa6223d
# Dataset Card for CNN Dailymail Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** [CNN / DailyMail Dataset repository](https://github.com/abisee/cnn-dailymail) - **Paper:** [Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf), [Get To The Point: Summarization with Pointer-Generator Networks](https://www.aclweb.org/anthology/K16-1028.pdf) - **Leaderboard:** [Papers with Code leaderboard for CNN / Dailymail Dataset](https://paperswithcode.com/sota/document-summarization-on-cnn-daily-mail) - **Point of Contact:** [Abigail See](mailto:abisee@stanford.edu) ### Dataset Summary The CNN / DailyMail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering. ### Supported Tasks and Leaderboards - 'summarization': [Versions 2.0.0 and 3.0.0 of the CNN / DailyMail Dataset](https://www.aclweb.org/anthology/K16-1028.pdf) can be used to train a model for abstractive and extractive summarization ([Version 1.0.0](https://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend.pdf) was developed for machine reading and comprehension and abstractive question answering). The model performance is measured by how high the output summary's [ROUGE](https://huggingface.co/metrics/rouge) score for a given article is when compared to the highlight as written by the original article author. [Zhong et al (2020)](https://www.aclweb.org/anthology/2020.acl-main.552.pdf) report a ROUGE-1 score of 44.41 when testing a model trained for extractive summarization. See the [Papers With Code leaderboard](https://paperswithcode.com/sota/document-summarization-on-cnn-daily-mail) for more models. ### Languages The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data. ## Dataset Structure ### Data Instances For each instance, there is a string for the article, a string for the highlights, and a string for the id. See the [CNN / Daily Mail dataset viewer](https://huggingface.co/datasets/viewer/?dataset=cnn_dailymail&config=3.0.0) to explore more examples. 
``` {'id': '0054d6d30dbcad772e20b22771153a2a9cbeaf62', 'article': '(CNN) -- An American woman died aboard a cruise ship that docked at Rio de Janeiro on Tuesday, the same ship on which 86 passengers previously fell ill, according to the state-run Brazilian news agency, Agencia Brasil. The American tourist died aboard the MS Veendam, owned by cruise operator Holland America. Federal Police told Agencia Brasil that forensic doctors were investigating her death. The ship's doctors told police that the woman was elderly and suffered from diabetes and hypertension, according the agency. The other passengers came down with diarrhea prior to her death during an earlier part of the trip, the ship's doctors said. The Veendam left New York 36 days ago for a South America tour.' 'highlights': 'The elderly woman suffered from diabetes and hypertension, ship's doctors say .\nPreviously, 86 passengers had fallen ill on the ship, Agencia Brasil says .'} ``` The average token counts for the articles and the highlights are provided below: | Feature | Mean Token Count | | ---------- | ---------------- | | Article | 781 | | Highlights | 56 | ### Data Fields - `id`: a string containing the hexadecimal-formatted SHA1 hash of the URL where the story was retrieved from - `article`: a string containing the body of the news article - `highlights`: a string containing the highlight of the article as written by the article author ### Data Splits The CNN/DailyMail dataset has 3 splits: _train_, _validation_, and _test_. Below are the statistics for Version 3.0.0 of the dataset. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 287,113 | | Validation | 13,368 | | Test | 11,490 | ## Dataset Creation ### Curation Rationale Version 1.0.0 aimed to support supervised neural methodologies for machine reading and question answering with a large amount of real natural language training data and released about 313k unique articles and nearly 1M Cloze style questions to go with the articles. Versions 2.0.0 and 3.0.0 changed the structure of the dataset to support summarization rather than question answering. Version 3.0.0 provided a non-anonymized version of the data, whereas both the previous versions were preprocessed to replace named entities with unique identifier labels. ### Source Data #### Initial Data Collection and Normalization The data consists of news articles and highlight sentences. In the question answering setting of the data, the articles are used as the context and entities are hidden one at a time in the highlight sentences, producing Cloze style questions where the goal of the model is to correctly guess which entity in the context has been hidden in the highlight. In the summarization setting, the highlight sentences are concatenated to form a summary of the article. The CNN articles were written between April 2007 and April 2015. The Daily Mail articles were written between June 2010 and April 2015. The code for the original data collection is available at <https://github.com/deepmind/rc-data>. The articles were downloaded using archives of <www.cnn.com> and <www.dailymail.co.uk> on the Wayback Machine. Articles were not included in the Version 1.0.0 collection if they exceeded 2000 tokens. Due to accessibility issues with the Wayback Machine, Kyunghyun Cho has made the datasets available at <https://cs.nyu.edu/~kcho/DMQA/>. 
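To make the two settings concrete, the following illustrative sketch (not the original DeepMind pipeline; the hard-coded entity is a hypothetical stand-in for the NER and anonymization step it used) builds a summary and a Cloze question from one highlight:

```python
# Illustrative sketch only: the real DeepMind pipeline ran named-entity
# recognition and replaced entities with anonymized @entity markers; the
# hard-coded entity below is a hypothetical stand-in for that step.
highlights = [
    "Previously, 86 passengers had fallen ill on the ship, Agencia Brasil says.",
]
entities = ["Agencia Brasil"]

# Summarization setting: highlight sentences are concatenated into the summary.
summary = "\n".join(highlights)

# Question-answering setting: hide one entity at a time to form a Cloze question.
for sentence in highlights:
    for entity in entities:
        if entity in sentence:
            question = sentence.replace(entity, "@placeholder")
            print(question)  # the model must recover `entity` from the article
```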
An updated version of the code that does not anonymize the data is available at <https://github.com/abisee/cnn-dailymail>. Hermann et al provided their own tokenization script. The script provided by See uses the PTBTokenizer. It also lowercases the text and adds periods to lines missing them. #### Who are the source language producers? The text was written by journalists at CNN and the Daily Mail. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Version 3.0 is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop models that can summarize long paragraphs of text in one or two sentences. This task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any summarizations produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated. ### Discussion of Biases [Bordia and Bowman (2019)](https://www.aclweb.org/anthology/N19-3002.pdf) explore measuring gender bias and debiasing techniques in the CNN / Dailymail dataset, the Penn Treebank, and WikiText-2. They find the CNN / Dailymail dataset to have a slightly lower gender bias based on their metric compared to the other datasets, but still show evidence of gender bias when looking at words such as 'fragile'. Because the articles were written by and for people in the US and the UK, they will likely present specifically US and UK perspectives and feature events that are considered relevant to those populations during the time that the articles were published. ### Other Known Limitations News articles have been shown to conform to writing conventions in which important information is primarily presented in the first third of the article [(Kryściński et al, 2019)](https://www.aclweb.org/anthology/D19-1051.pdf). [Chen et al (2016)](https://www.aclweb.org/anthology/P16-1223.pdf) conducted a manual study of 100 random instances of the first version of the dataset and found 25% of the samples to be difficult even for humans to answer correctly due to ambiguity and coreference errors. It should also be noted that machine-generated summarizations, even when extractive, may differ in truth values when compared to the original articles. ## Additional Information ### Dataset Curators The data was originally collected by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom of Google DeepMind. Tomáš Kočiský and Phil Blunsom are also affiliated with the University of Oxford. They released scripts to collect and process the data into the question answering format. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, and Bing Xiang of IBM Watson and Çağlar Gülçehre of Université de Montréal modified Hermann et al's collection scripts to restore the data to a summary format. They also produced both anonymized and non-anonymized versions. The code for the non-anonymized version is made publicly available by Abigail See of Stanford University, Peter J. Liu of Google Brain and Christopher D. Manning of Stanford University at <https://github.com/abisee/cnn-dailymail>. The work at Stanford University was supported by the DARPA DEFT Program, AFRL contract no. 
FA8750-13-2-0040. ### Licensing Information The CNN / Daily Mail dataset version 1.0.0 is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0). ### Citation Information ``` @inproceedings{see-etal-2017-get, title = "Get To The Point: Summarization with Pointer-Generator Networks", author = "See, Abigail and Liu, Peter J. and Manning, Christopher D.", booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2017", address = "Vancouver, Canada", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P17-1099", doi = "10.18653/v1/P17-1099", pages = "1073--1083", abstract = "Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text). However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves. In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways. First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator. Second, we use coverage to keep track of what has been summarized, which discourages repetition. We apply our model to the CNN / Daily Mail summarization task, outperforming the current abstractive state-of-the-art by at least 2 ROUGE points.", } ``` ``` @inproceedings{DBLP:conf/nips/HermannKGEKSB15, author={Karl Moritz Hermann and Tomás Kociský and Edward Grefenstette and Lasse Espeholt and Will Kay and Mustafa Suleyman and Phil Blunsom}, title={Teaching Machines to Read and Comprehend}, year={2015}, cdate={1420070400000}, pages={1693-1701}, url={http://papers.nips.cc/paper/5945-teaching-machines-to-read-and-comprehend}, booktitle={NIPS}, crossref={conf/nips/2015} } ``` ### Contributions Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@jbragg](https://github.com/jbragg), [@patrickvonplaten](https://github.com/patrickvonplaten) and [@mcmillanmajora](https://github.com/mcmillanmajora) for adding this dataset.
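A minimal usage sketch, assuming the Hugging Face `datasets` and `evaluate` libraries are installed; the lead-3 heuristic below is only a stand-in for a real summarization model:

```python
from datasets import load_dataset
import evaluate  # the ROUGE metric also needs the rouge_score package

# Load the non-anonymized 3.0.0 version; "1.0.0" and "2.0.0" are also available.
dataset = load_dataset("cnn_dailymail", "3.0.0")
sample = dataset["validation"][0]

# A crude lead-3 baseline: take the first three sentences of the article.
prediction = ". ".join(sample["article"].split(". ")[:3])

# Score the prediction against the author-written highlights.
rouge = evaluate.load("rouge")
print(rouge.compute(predictions=[prediction], references=[sample["highlights"]]))
```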
cnn_dailymail
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:apache-2.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "paperswithcode_id": "cnn-daily-mail-1", "pretty_name": "CNN / Daily Mail", "dataset_info": [{"config_name": "1.0.0", "features": [{"name": "article", "dtype": "string"}, {"name": "highlights", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1261703785, "num_examples": 287113}, {"name": "validation", "num_bytes": 57732412, "num_examples": 13368}, {"name": "test", "num_bytes": 49925732, "num_examples": 11490}], "download_size": 836927248, "dataset_size": 1369361929}, {"config_name": "2.0.0", "features": [{"name": "article", "dtype": "string"}, {"name": "highlights", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1261703785, "num_examples": 287113}, {"name": "validation", "num_bytes": 57732412, "num_examples": 13368}, {"name": "test", "num_bytes": 49925732, "num_examples": 11490}], "download_size": 837094602, "dataset_size": 1369361929}, {"config_name": "3.0.0", "features": [{"name": "article", "dtype": "string"}, {"name": "highlights", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1261703785, "num_examples": 287113}, {"name": "validation", "num_bytes": 57732412, "num_examples": 13368}, {"name": "test", "num_bytes": 49925732, "num_examples": 11490}], "download_size": 837094602, "dataset_size": 1369361929}], "configs": [{"config_name": "1.0.0", "data_files": [{"split": "train", "path": "1.0.0/train-*"}, {"split": "validation", "path": "1.0.0/validation-*"}, {"split": "test", "path": "1.0.0/test-*"}]}, {"config_name": "2.0.0", "data_files": [{"split": "train", "path": "2.0.0/train-*"}, {"split": "validation", "path": "2.0.0/validation-*"}, {"split": "test", "path": "2.0.0/test-*"}]}, {"config_name": "3.0.0", "data_files": [{"split": "train", "path": "3.0.0/train-*"}, {"split": "validation", "path": "3.0.0/validation-*"}, {"split": "test", "path": "3.0.0/test-*"}]}], "train-eval-index": [{"config": "3.0.0", "task": "summarization", "task_id": "summarization", "splits": {"eval_split": "test"}, "col_mapping": {"article": "text", "highlights": "target"}}]}
2024-01-18T15:31:34+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-apache-2.0 #region-us
Dataset Card for CNN Dailymail Dataset ====================================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: CNN / DailyMail Dataset repository * Paper: Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond, Get To The Point: Summarization with Pointer-Generator Networks * Leaderboard: Papers with Code leaderboard for CNN / Dailymail Dataset * Point of Contact: Abigail See ### Dataset Summary The CNN / DailyMail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering. ### Supported Tasks and Leaderboards * 'summarization': Versions 2.0.0 and 3.0.0 of the CNN / DailyMail Dataset can be used to train a model for abstractive and extractive summarization (Version 1.0.0 was developed for machine reading and comprehension and abstractive question answering). The model performance is measured by how high the output summary's ROUGE score for a given article is when compared to the highlight as written by the original article author. Zhong et al (2020) report a ROUGE-1 score of 44.41 when testing a model trained for extractive summarization. See the Papers With Code leaderboard for more models. ### Languages The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data. Dataset Structure ----------------- ### Data Instances For each instance, there is a string for the article, a string for the highlights, and a string for the id. See the CNN / Daily Mail dataset viewer to explore more examples. The average token count for the articles and the highlights are provided below: ### Data Fields * 'id': a string containing the heximal formated SHA1 hash of the url where the story was retrieved from * 'article': a string containing the body of the news article * 'highlights': a string containing the highlight of the article as written by the article author ### Data Splits The CNN/DailyMail dataset has 3 splits: *train*, *validation*, and *test*. Below are the statistics for Version 3.0.0 of the dataset. Dataset Creation ---------------- ### Curation Rationale Version 1.0.0 aimed to support supervised neural methodologies for machine reading and question answering with a large amount of real natural language training data and released about 313k unique articles and nearly 1M Cloze style questions to go with the articles. Versions 2.0.0 and 3.0.0 changed the structure of the dataset to support summarization rather than question answering. Version 3.0.0 provided a non-anonymized version of the data, whereas both the previous versions were preprocessed to replace named entities with unique identifier labels. 
### Source Data #### Initial Data Collection and Normalization The data consists of news articles and highlight sentences. In the question answering setting of the data, the articles are used as the context and entities are hidden one at a time in the highlight sentences, producing Cloze style questions where the goal of the model is to correctly guess which entity in the context has been hidden in the highlight. In the summarization setting, the highlight sentences are concatenated to form a summary of the article. The CNN articles were written between April 2007 and April 2015. The Daily Mail articles were written between June 2010 and April 2015. The code for the original data collection is available at <URL The articles were downloaded using archives of and on the Wayback Machine. Articles were not included in the Version 1.0.0 collection if they exceeded 2000 tokens. Due to accessibility issues with the Wayback Machine, Kyunghyun Cho has made the datasets available at <URL An updated version of the code that does not anonymize the data is available at <URL Hermann et al provided their own tokenization script. The script provided by See uses the PTBTokenizer. It also lowercases the text and adds periods to lines missing them. #### Who are the source language producers? The text was written by journalists at CNN and the Daily Mail. ### Annotations The dataset does not contain any additional annotations. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Version 3.0 is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset The purpose of this dataset is to help develop models that can summarize long paragraphs of text in one or two sentences. This task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any summarizations produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated. ### Discussion of Biases Bordia and Bowman (2019) explore measuring gender bias and debiasing techniques in the CNN / Dailymail dataset, the Penn Treebank, and WikiText-2. They find the CNN / Dailymail dataset to have a slightly lower gender bias based on their metric compared to the other datasets, but still show evidence of gender bias when looking at words such as 'fragile'. Because the articles were written by and for people in the US and the UK, they will likely present specifically US and UK perspectives and feature events that are considered relevant to those populations during the time that the articles were published. ### Other Known Limitations News articles have been shown to conform to writing conventions in which important information is primarily presented in the first third of the article (Kryściński et al, 2019). Chen et al (2016) conducted a manual study of 100 random instances of the first version of the dataset and found 25% of the samples to be difficult even for humans to answer correctly due to ambiguity and coreference errors. It should also be noted that machine-generated summarizations, even when extractive, may differ in truth values when compared to the original articles. 
Additional Information ---------------------- ### Dataset Curators The data was originally collected by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom of Google DeepMind. Tomáš Kočiský and Phil Blunsom are also affiliated with the University of Oxford. They released scripts to collect and process the data into the question answering format. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, and Bing Xiang of IMB Watson and Çağlar Gu̇lçehre of Université de Montréal modified Hermann et al's collection scripts to restore the data to a summary format. They also produced both anonymized and non-anonymized versions. The code for the non-anonymized version is made publicly available by Abigail See of Stanford University, Peter J. Liu of Google Brain and Christopher D. Manning of Stanford University at <URL The work at Stanford University was supported by the DARPA DEFT ProgramAFRL contract no. FA8750-13-2-0040. ### Licensing Information The CNN / Daily Mail dataset version 1.0.0 is released under the Apache-2.0 License. ### Contributions Thanks to @thomwolf, @lewtun, @jplu, @jbragg, @patrickvonplaten and @mcmillanmajora for adding this dataset.
[ "### Dataset Summary\n\n\nThe CNN / DailyMail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering.", "### Supported Tasks and Leaderboards\n\n\n* 'summarization': Versions 2.0.0 and 3.0.0 of the CNN / DailyMail Dataset can be used to train a model for abstractive and extractive summarization (Version 1.0.0 was developed for machine reading and comprehension and abstractive question answering). The model performance is measured by how high the output summary's ROUGE score for a given article is when compared to the highlight as written by the original article author. Zhong et al (2020) report a ROUGE-1 score of 44.41 when testing a model trained for extractive summarization. See the Papers With Code leaderboard for more models.", "### Languages\n\n\nThe BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nFor each instance, there is a string for the article, a string for the highlights, and a string for the id. See the CNN / Daily Mail dataset viewer to explore more examples.\n\n\nThe average token count for the articles and the highlights are provided below:", "### Data Fields\n\n\n* 'id': a string containing the heximal formated SHA1 hash of the url where the story was retrieved from\n* 'article': a string containing the body of the news article\n* 'highlights': a string containing the highlight of the article as written by the article author", "### Data Splits\n\n\nThe CNN/DailyMail dataset has 3 splits: *train*, *validation*, and *test*. Below are the statistics for Version 3.0.0 of the dataset.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nVersion 1.0.0 aimed to support supervised neural methodologies for machine reading and question answering with a large amount of real natural language training data and released about 313k unique articles and nearly 1M Cloze style questions to go with the articles. Versions 2.0.0 and 3.0.0 changed the structure of the dataset to support summarization rather than question answering. Version 3.0.0 provided a non-anonymized version of the data, whereas both the previous versions were preprocessed to replace named entities with unique identifier labels.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data consists of news articles and highlight sentences. In the question answering setting of the data, the articles are used as the context and entities are hidden one at a time in the highlight sentences, producing Cloze style questions where the goal of the model is to correctly guess which entity in the context has been hidden in the highlight. In the summarization setting, the highlight sentences are concatenated to form a summary of the article. The CNN articles were written between April 2007 and April 2015. The Daily Mail articles were written between June 2010 and April 2015.\n\n\nThe code for the original data collection is available at <URL The articles were downloaded using archives of and on the Wayback Machine. 
Articles were not included in the Version 1.0.0 collection if they exceeded 2000 tokens. Due to accessibility issues with the Wayback Machine, Kyunghyun Cho has made the datasets available at <URL An updated version of the code that does not anonymize the data is available at <URL\n\n\nHermann et al provided their own tokenization script. The script provided by See uses the PTBTokenizer. It also lowercases the text and adds periods to lines missing them.", "#### Who are the source language producers?\n\n\nThe text was written by journalists at CNN and the Daily Mail.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nVersion 3.0 is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop models that can summarize long paragraphs of text in one or two sentences.\n\n\nThis task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any summarizations produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated.", "### Discussion of Biases\n\n\nBordia and Bowman (2019) explore measuring gender bias and debiasing techniques in the CNN / Dailymail dataset, the Penn Treebank, and WikiText-2. They find the CNN / Dailymail dataset to have a slightly lower gender bias based on their metric compared to the other datasets, but still show evidence of gender bias when looking at words such as 'fragile'.\n\n\nBecause the articles were written by and for people in the US and the UK, they will likely present specifically US and UK perspectives and feature events that are considered relevant to those populations during the time that the articles were published.", "### Other Known Limitations\n\n\nNews articles have been shown to conform to writing conventions in which important information is primarily presented in the first third of the article (Kryściński et al, 2019). Chen et al (2016) conducted a manual study of 100 random instances of the first version of the dataset and found 25% of the samples to be difficult even for humans to answer correctly due to ambiguity and coreference errors.\n\n\nIt should also be noted that machine-generated summarizations, even when extractive, may differ in truth values when compared to the original articles.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe data was originally collected by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom of Google DeepMind. Tomáš Kočiský and Phil Blunsom are also affiliated with the University of Oxford. They released scripts to collect and process the data into the question answering format.\n\n\nRamesh Nallapati, Bowen Zhou, Cicero dos Santos, and Bing Xiang of IMB Watson and Çağlar Gu̇lçehre of Université de Montréal modified Hermann et al's collection scripts to restore the data to a summary format. They also produced both anonymized and non-anonymized versions.\n\n\nThe code for the non-anonymized version is made publicly available by Abigail See of Stanford University, Peter J. 
Liu of Google Brain and Christopher D. Manning of Stanford University at <URL The work at Stanford University was supported by the DARPA DEFT ProgramAFRL contract no. FA8750-13-2-0040.", "### Licensing Information\n\n\nThe CNN / Daily Mail dataset version 1.0.0 is released under the Apache-2.0 License.", "### Contributions\n\n\nThanks to @thomwolf, @lewtun, @jplu, @jbragg, @patrickvonplaten and @mcmillanmajora for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-apache-2.0 #region-us \n", "### Dataset Summary\n\n\nThe CNN / DailyMail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering.", "### Supported Tasks and Leaderboards\n\n\n* 'summarization': Versions 2.0.0 and 3.0.0 of the CNN / DailyMail Dataset can be used to train a model for abstractive and extractive summarization (Version 1.0.0 was developed for machine reading and comprehension and abstractive question answering). The model performance is measured by how high the output summary's ROUGE score for a given article is when compared to the highlight as written by the original article author. Zhong et al (2020) report a ROUGE-1 score of 44.41 when testing a model trained for extractive summarization. See the Papers With Code leaderboard for more models.", "### Languages\n\n\nThe BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nFor each instance, there is a string for the article, a string for the highlights, and a string for the id. See the CNN / Daily Mail dataset viewer to explore more examples.\n\n\nThe average token count for the articles and the highlights are provided below:", "### Data Fields\n\n\n* 'id': a string containing the heximal formated SHA1 hash of the url where the story was retrieved from\n* 'article': a string containing the body of the news article\n* 'highlights': a string containing the highlight of the article as written by the article author", "### Data Splits\n\n\nThe CNN/DailyMail dataset has 3 splits: *train*, *validation*, and *test*. Below are the statistics for Version 3.0.0 of the dataset.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nVersion 1.0.0 aimed to support supervised neural methodologies for machine reading and question answering with a large amount of real natural language training data and released about 313k unique articles and nearly 1M Cloze style questions to go with the articles. Versions 2.0.0 and 3.0.0 changed the structure of the dataset to support summarization rather than question answering. Version 3.0.0 provided a non-anonymized version of the data, whereas both the previous versions were preprocessed to replace named entities with unique identifier labels.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nThe data consists of news articles and highlight sentences. In the question answering setting of the data, the articles are used as the context and entities are hidden one at a time in the highlight sentences, producing Cloze style questions where the goal of the model is to correctly guess which entity in the context has been hidden in the highlight. In the summarization setting, the highlight sentences are concatenated to form a summary of the article. The CNN articles were written between April 2007 and April 2015. 
The Daily Mail articles were written between June 2010 and April 2015.\n\n\nThe code for the original data collection is available at <URL The articles were downloaded using archives of and on the Wayback Machine. Articles were not included in the Version 1.0.0 collection if they exceeded 2000 tokens. Due to accessibility issues with the Wayback Machine, Kyunghyun Cho has made the datasets available at <URL An updated version of the code that does not anonymize the data is available at <URL\n\n\nHermann et al provided their own tokenization script. The script provided by See uses the PTBTokenizer. It also lowercases the text and adds periods to lines missing them.", "#### Who are the source language producers?\n\n\nThe text was written by journalists at CNN and the Daily Mail.", "### Annotations\n\n\nThe dataset does not contain any additional annotations.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nVersion 3.0 is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop models that can summarize long paragraphs of text in one or two sentences.\n\n\nThis task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any summarizations produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated.", "### Discussion of Biases\n\n\nBordia and Bowman (2019) explore measuring gender bias and debiasing techniques in the CNN / Dailymail dataset, the Penn Treebank, and WikiText-2. They find the CNN / Dailymail dataset to have a slightly lower gender bias based on their metric compared to the other datasets, but still show evidence of gender bias when looking at words such as 'fragile'.\n\n\nBecause the articles were written by and for people in the US and the UK, they will likely present specifically US and UK perspectives and feature events that are considered relevant to those populations during the time that the articles were published.", "### Other Known Limitations\n\n\nNews articles have been shown to conform to writing conventions in which important information is primarily presented in the first third of the article (Kryściński et al, 2019). Chen et al (2016) conducted a manual study of 100 random instances of the first version of the dataset and found 25% of the samples to be difficult even for humans to answer correctly due to ambiguity and coreference errors.\n\n\nIt should also be noted that machine-generated summarizations, even when extractive, may differ in truth values when compared to the original articles.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThe data was originally collected by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom of Google DeepMind. Tomáš Kočiský and Phil Blunsom are also affiliated with the University of Oxford. They released scripts to collect and process the data into the question answering format.\n\n\nRamesh Nallapati, Bowen Zhou, Cicero dos Santos, and Bing Xiang of IMB Watson and Çağlar Gu̇lçehre of Université de Montréal modified Hermann et al's collection scripts to restore the data to a summary format. 
They also produced both anonymized and non-anonymized versions.\n\n\nThe code for the non-anonymized version is made publicly available by Abigail See of Stanford University, Peter J. Liu of Google Brain and Christopher D. Manning of Stanford University at <URL The work at Stanford University was supported by the DARPA DEFT ProgramAFRL contract no. FA8750-13-2-0040.", "### Licensing Information\n\n\nThe CNN / Daily Mail dataset version 1.0.0 is released under the Apache-2.0 License.", "### Contributions\n\n\nThanks to @thomwolf, @lewtun, @jplu, @jbragg, @patrickvonplaten and @mcmillanmajora for adding this dataset." ]
[ 91, 76, 147, 69, 64, 73, 54, 126, 4, 266, 24, 17, 10, 14, 50, 91, 142, 133, 234, 26, 45 ]
[ "passage: TAGS\n#task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-apache-2.0 #region-us \n### Dataset Summary\n\n\nThe CNN / DailyMail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering.### Supported Tasks and Leaderboards\n\n\n* 'summarization': Versions 2.0.0 and 3.0.0 of the CNN / DailyMail Dataset can be used to train a model for abstractive and extractive summarization (Version 1.0.0 was developed for machine reading and comprehension and abstractive question answering). The model performance is measured by how high the output summary's ROUGE score for a given article is when compared to the highlight as written by the original article author. Zhong et al (2020) report a ROUGE-1 score of 44.41 when testing a model trained for extractive summarization. See the Papers With Code leaderboard for more models.### Languages\n\n\nThe BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nFor each instance, there is a string for the article, a string for the highlights, and a string for the id. See the CNN / Daily Mail dataset viewer to explore more examples.\n\n\nThe average token count for the articles and the highlights are provided below:", "passage: ### Data Fields\n\n\n* 'id': a string containing the heximal formated SHA1 hash of the url where the story was retrieved from\n* 'article': a string containing the body of the news article\n* 'highlights': a string containing the highlight of the article as written by the article author### Data Splits\n\n\nThe CNN/DailyMail dataset has 3 splits: *train*, *validation*, and *test*. Below are the statistics for Version 3.0.0 of the dataset.\n\n\n\nDataset Creation\n----------------### Curation Rationale\n\n\nVersion 1.0.0 aimed to support supervised neural methodologies for machine reading and question answering with a large amount of real natural language training data and released about 313k unique articles and nearly 1M Cloze style questions to go with the articles. Versions 2.0.0 and 3.0.0 changed the structure of the dataset to support summarization rather than question answering. Version 3.0.0 provided a non-anonymized version of the data, whereas both the previous versions were preprocessed to replace named entities with unique identifier labels.### Source Data#### Initial Data Collection and Normalization\n\n\nThe data consists of news articles and highlight sentences. In the question answering setting of the data, the articles are used as the context and entities are hidden one at a time in the highlight sentences, producing Cloze style questions where the goal of the model is to correctly guess which entity in the context has been hidden in the highlight. In the summarization setting, the highlight sentences are concatenated to form a summary of the article. The CNN articles were written between April 2007 and April 2015. 
The Daily Mail articles were written between June 2010 and April 2015.\n\n\nThe code for the original data collection is available at <URL The articles were downloaded using archives of and on the Wayback Machine. Articles were not included in the Version 1.0.0 collection if they exceeded 2000 tokens. Due to accessibility issues with the Wayback Machine, Kyunghyun Cho has made the datasets available at <URL An updated version of the code that does not anonymize the data is available at <URL\n\n\nHermann et al provided their own tokenization script. The script provided by See uses the PTBTokenizer. It also lowercases the text and adds periods to lines missing them.#### Who are the source language producers?\n\n\nThe text was written by journalists at CNN and the Daily Mail.### Annotations\n\n\nThe dataset does not contain any additional annotations.#### Annotation process\n\n\n[N/A]", "passage: #### Who are the annotators?\n\n\n[N/A]### Personal and Sensitive Information\n\n\nVersion 3.0 is not anonymized, so individuals' names can be found in the dataset. Information about the original author is not included in the dataset.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop models that can summarize long paragraphs of text in one or two sentences.\n\n\nThis task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any summarizations produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated.### Discussion of Biases\n\n\nBordia and Bowman (2019) explore measuring gender bias and debiasing techniques in the CNN / Dailymail dataset, the Penn Treebank, and WikiText-2. They find the CNN / Dailymail dataset to have a slightly lower gender bias based on their metric compared to the other datasets, but still show evidence of gender bias when looking at words such as 'fragile'.\n\n\nBecause the articles were written by and for people in the US and the UK, they will likely present specifically US and UK perspectives and feature events that are considered relevant to those populations during the time that the articles were published.### Other Known Limitations\n\n\nNews articles have been shown to conform to writing conventions in which important information is primarily presented in the first third of the article (Kryściński et al, 2019). Chen et al (2016) conducted a manual study of 100 random instances of the first version of the dataset and found 25% of the samples to be difficult even for humans to answer correctly due to ambiguity and coreference errors.\n\n\nIt should also be noted that machine-generated summarizations, even when extractive, may differ in truth values when compared to the original articles.\n\n\nAdditional Information\n----------------------" ]
c5a050c88eea9927dc6b914184b1c2b2d031cd07
# Dataset Card for Coached Conversational Preference Elicitation ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Coached Conversational Preference Elicitation Homepage](https://research.google/tools/datasets/coached-conversational-preference-elicitation/) - **Repository:** [Coached Conversational Preference Elicitation Repository](https://github.com/google-research-datasets/ccpe) - **Paper:** [Aclweb](https://www.aclweb.org/anthology/W19-5941/) ### Dataset Summary A dataset consisting of 502 English dialogs with 12,000 annotated utterances between a user and an assistant discussing movie preferences in natural language. It was collected using a Wizard-of-Oz methodology between two paid crowd-workers, where one worker plays the role of an 'assistant', while the other plays the role of a 'user'. The 'assistant' elicits the 'user’s' preferences about movies following a Coached Conversational Preference Elicitation (CCPE) method. The assistant asks questions designed to minimize, as much as possible, the bias in the terminology the 'user' employs to convey his or her preferences, and to obtain these preferences in natural language. Each dialog is annotated with entity mentions, preferences expressed about entities, descriptions of entities provided, and other statements about entities. ### Supported Tasks and Leaderboards * `other-other-Conversational Recommendation`: The dataset can be used to train a model for conversational recommendation based on coached conversational preference elicitation. ### Languages The text in the dataset is in English. The associated BCP-47 code is `en`. ## Dataset Structure ### Data Instances A typical data point comprises a series of utterances between the 'assistant' and the 'user'. Each utterance is annotated with the categories described under Data Fields below. 
An example from the Coached Conversational Preference Elicitation dataset looks as follows: ``` {'conversationId': 'CCPE-6faee', 'utterances': {'index': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 'segments': [{'annotations': [{'annotationType': [], 'entityType': []}], 'endIndex': [0], 'startIndex': [0], 'text': ['']}, {'annotations': [{'annotationType': [0], 'entityType': [0]}, {'annotationType': [1], 'entityType': [0]}], 'endIndex': [20, 27], 'startIndex': [14, 0], 'text': ['comedy', 'I really like comedy movies']}, {'annotations': [{'annotationType': [0], 'entityType': [0]}], 'endIndex': [24], 'startIndex': [16], 'text': ['comedies']}, {'annotations': [{'annotationType': [1], 'entityType': [0]}], 'endIndex': [15], 'startIndex': [0], 'text': ['I love to laugh']}, {'annotations': [{'annotationType': [], 'entityType': []}], 'endIndex': [0], 'startIndex': [0], 'text': ['']}, {'annotations': [{'annotationType': [0], 'entityType': [1]}, {'annotationType': [1], 'entityType': [1]}], 'endIndex': [21, 21], 'startIndex': [8, 0], 'text': ['Step Brothers', 'I liked Step Brothers']}, {'annotations': [{'annotationType': [], 'entityType': []}], 'endIndex': [0], 'startIndex': [0], 'text': ['']}, {'annotations': [{'annotationType': [1], 'entityType': [1]}], 'endIndex': [32], 'startIndex': [0], 'text': ['Had some amazing one-liners that']}, {'annotations': [{'annotationType': [], 'entityType': []}], 'endIndex': [0], 'startIndex': [0], 'text': ['']}, {'annotations': [{'annotationType': [0], 'entityType': [1]}, {'annotationType': [1], 'entityType': [1]}], 'endIndex': [15, 15], 'startIndex': [13, 0], 'text': ['RV', "I don't like RV"]}, {'annotations': [{'annotationType': [], 'entityType': []}], 'endIndex': [0], 'startIndex': [0], 'text': ['']}, {'annotations': [{'annotationType': [1], 'entityType': [1]}, {'annotationType': [1], 'entityType': [1]}], 'endIndex': [48, 66], 'startIndex': [18, 50], 'text': ['It was just so slow and boring', "I didn't like it"]}, {'annotations': [{'annotationType': [0], 'entityType': [1]}], 'endIndex': [63], 'startIndex': [33], 'text': ['Jurassic World: Fallen Kingdom']}, {'annotations': [{'annotationType': [0], 'entityType': [1]}, {'annotationType': [3], 'entityType': [1]}], 'endIndex': [52, 52], 'startIndex': [22, 0], 'text': ['Jurassic World: Fallen Kingdom', 'I have seen the movie Jurassic World: Fallen Kingdom']}, {'annotations': [{'annotationType': [], 'entityType': []}], 'endIndex': [0], 'startIndex': [0], 'text': ['']}, {'annotations': [{'annotationType': [1], 'entityType': [1]}, {'annotationType': [1], 'entityType': [1]}, {'annotationType': [1], 'entityType': [1]}], 'endIndex': [24, 125, 161], 'startIndex': [0, 95, 135], 'text': ['I really like the actors', 'I just really like the scenery', 'the dinosaurs were awesome']}], 'speaker': [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0], 'text': ['What kinds of movies do you like?', 'I really like comedy movies.', 'Why do you like comedies?', "I love to laugh and comedy movies, that's their whole purpose. Make you laugh.", 'Alright, how about a movie you liked?', 'I liked Step Brothers.', 'Why did you like that movie?', 'Had some amazing one-liners that still get used today even though the movie was made awhile ago.', 'Well, is there a movie you did not like?', "I don't like RV.", 'Why not?', "And I just didn't It was just so slow and boring. 
I didn't like it.", 'Ok, then have you seen the movie Jurassic World: Fallen Kingdom', 'I have seen the movie Jurassic World: Fallen Kingdom.', 'What is it about these kinds of movies that you like or dislike?', 'I really like the actors. I feel like they were doing their best to make the movie better. And I just really like the scenery, and the the dinosaurs were awesome.']}} ``` ### Data Fields Each conversation has the following fields: * `conversationId`: A unique random ID for the conversation. The ID has no meaning. * `utterances`: An array of utterances by the workers. Each utterance has the following fields: * `index`: A 0-based index indicating the order of the utterances in the conversation. * `speaker`: Either USER or ASSISTANT, indicating which role generated this utterance. * `text`: The raw text as written by the ASSISTANT, or transcribed from the spoken recording of USER. * `segments`: An array of semantic annotations of spans in the text. Each semantic annotation segment has the following fields: * `startIndex`: The position of the start of the annotation in the utterance text. * `endIndex`: The position of the end of the annotation in the utterance text. * `text`: The raw text that has been annotated. * `annotations`: An array of annotation details for this segment. Each annotation has two fields: * `annotationType`: The class of annotation (see ontology below). * `entityType`: The class of the entity to which the text refers (see ontology below). **EXPLANATION OF ONTOLOGY** In the corpus, preferences and the entities that these preferences refer to are annotated with an annotation type as well as an entity type. Annotation types fall into four categories: * `ENTITY_NAME` (0): These mark the names of relevant entities mentioned. * `ENTITY_PREFERENCE` (1): These are defined as statements indicating that the dialog participant does or does not like the relevant entity in general, or that they do or do not like some aspect of the entity. This may also be thought of as the participant expressing some sentiment about what is being discussed. * `ENTITY_DESCRIPTION` (2): Neutral descriptions that describe an entity but do not convey an explicit liking or disliking. * `ENTITY_OTHER` (3): Other statements about an entity that convey relevant information about how the participant relates to the entity but do not provide a sentiment. Most often, these relate to whether a participant has seen a particular movie, or knows a lot about a given entity. Entity types are marked as belonging to one of four categories: * `MOVIE_GENRE_OR_CATEGORY` (0): For genres or general descriptions that capture a particular type or style of movie. * `MOVIE_OR_SERIES` (1): For the full or partial name of a movie or series of movies. * `PERSON` (2): For the full or partial name of an actual person. * `SOMETHING_ELSE` (3): For other important proper nouns, such as the names of characters or locations. ### Data Splits There is a single split of the dataset named 'train', which contains the whole dataset. | | Train | | ------------------- | ----- | | Input Conversations | 502 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
### Data Splits

There is a single split of the dataset named 'train' which contains the whole dataset.

|                     | Train |
| ------------------- | ----- |
| Input Conversations | 502   |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[Creative Commons Attribution 4.0 License](https://creativecommons.org/licenses/by/4.0/)

### Citation Information

```
@inproceedings{radlinski-etal-2019-ccpe,
  title = {Coached Conversational Preference Elicitation: A Case Study in Understanding Movie Preferences},
  author = {Filip Radlinski and Krisztian Balog and Bill Byrne and Karthik Krishnamoorthi},
  booktitle = {Proceedings of the Annual Meeting of the Special Interest Group on Discourse and Dialogue ({SIGDIAL})},
  year = 2019
}
```

### Contributions

Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset.
coached_conv_pref
[ "task_categories:other", "task_categories:text-generation", "task_categories:fill-mask", "task_categories:token-classification", "task_ids:dialogue-modeling", "task_ids:parsing", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "Conversational Recommendation", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["other", "text-generation", "fill-mask", "token-classification"], "task_ids": ["dialogue-modeling", "parsing"], "paperswithcode_id": "coached-conversational-preference-elicitation", "pretty_name": "Coached Conversational Preference Elicitation", "tags": ["Conversational Recommendation"], "dataset_info": {"features": [{"name": "conversationId", "dtype": "string"}, {"name": "utterances", "sequence": [{"name": "index", "dtype": "int32"}, {"name": "speaker", "dtype": {"class_label": {"names": {"0": "USER", "1": "ASSISTANT"}}}}, {"name": "text", "dtype": "string"}, {"name": "segments", "sequence": [{"name": "startIndex", "dtype": "int32"}, {"name": "endIndex", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "annotations", "sequence": [{"name": "annotationType", "dtype": {"class_label": {"names": {"0": "ENTITY_NAME", "1": "ENTITY_PREFERENCE", "2": "ENTITY_DESCRIPTION", "3": "ENTITY_OTHER"}}}}, {"name": "entityType", "dtype": {"class_label": {"names": {"0": "MOVIE_GENRE_OR_CATEGORY", "1": "MOVIE_OR_SERIES", "2": "PERSON", "3": "SOMETHING_ELSE"}}}}]}]}]}], "config_name": "coached_conv_pref", "splits": [{"name": "train", "num_bytes": 2295579, "num_examples": 502}], "download_size": 5191959, "dataset_size": 2295579}}
2024-01-18T09:16:22+00:00
[]
[ "en" ]