---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
languages:
- en
licenses:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: rebel-dataset
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-retrieval
- conditional-text-generation
task_ids:
- text-retrieval-other-relation-extraction
---

# Dataset Card for REBEL dataset

## Table of Contents

- [Dataset Card for REBEL dataset](#dataset-card-for-rebel)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Repository:** [https://github.com/Babelscape/rebel](https://github.com/Babelscape/rebel)
- **Paper:** [https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf)
- **Point of Contact:** [huguetcabot@babelscape.com](mailto:huguetcabot@babelscape.com)

### Dataset Summary

Dataset created for [REBEL](https://huggingface.co/Babelscape/rebel-large) by interlinking Wikidata and Wikipedia for Relation Extraction, and filtered using NLI.

### Supported Tasks and Leaderboards

- `text-retrieval-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists in extracting triplets from raw text, each made of a subject, an object and a relation type. Success on this task is typically measured by achieving a high [F1](https://huggingface.co/metrics/F1) score. The [BART](https://huggingface.co/transformers/model_doc/bart.html)-based model currently achieves an F1 of 74.
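
In REBEL-style data, triplets are linearized into a single string with the special tokens `<triplet>`, `<subj>` and `<obj>`, as in the REBEL repository; the exact instance layout is an assumption here, since this card does not show one. A minimal sketch of a parser that recovers (subject, relation, object) tuples from such a string:

```python
def parse_triplets(linearized: str):
    """Parse a REBEL-style linearized string into (subject, relation, object) tuples.

    Assumed format (from the REBEL repository): each group starts with <triplet>
    followed by the subject, then one or more "<subj> object <obj> relation"
    pairs sharing that subject.
    """
    triplets = []
    for chunk in linearized.split("<triplet>")[1:]:
        parts = chunk.split("<subj>")
        subject = parts[0].strip()
        for group in parts[1:]:
            if "<obj>" not in group:
                continue  # malformed group, skip it
            obj, relation = group.split("<obj>", 1)
            triplets.append((subject, relation.strip(), obj.strip()))
    return triplets

print(parse_triplets("<triplet> Punta Cana <subj> Dominican Republic <obj> country"))
# → [('Punta Cana', 'country', 'Dominican Republic')]
```
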

### Languages

The dataset is in English, from the English Wikipedia.

## Dataset Structure

### Data Instances

Provide a JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.

```
{
  'example_field': ...,
  ...
}
```

Provide any additional information that is not covered in the other sections about the data here. In particular, describe any relationships between data points and whether these relationships are made explicit.

### Data Fields

List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level and whether they are contiguous. If the dataset contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.

- `example_field`: description of `example_field`

Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [tagging app](https://github.com/huggingface/datasets-tagging); you will then only need to refine the generated descriptions.

### Data Splits

Describe and name the splits in the dataset if there are more than one.

Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.

Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:

|                         | Train | Valid | Test |
| ----------------------- | ----- | ----- | ---- |
| Input Sentences         |       |       |      |
| Average Sentence Length |       |       |      |

## Dataset Creation

### Curation Rationale

This dataset was created to enable the training of a BART-based model as a pre-training phase for Relation Extraction, as seen in the paper [REBEL: Relation Extraction By End-to-end Language generation](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf).

### Source Data

Data comes from Wikipedia text before the table of contents, as well as from Wikidata for the triplet annotations.

#### Initial Data Collection and Normalization

For the data collection, the dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/Babelscape/crocodile) was used, inspired by the [T-REx Pipeline](https://github.com/hadyelsahar/RE-NLG-Dataset); more details can be found at the [T-REx Website](https://hadyelsahar.github.io/t-rex/). The starting point is a Wikipedia dump as well as a Wikidata one.

After the triplets were extracted, an NLI system was used to filter out those not entailed by the text.
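
Conceptually, the NLI filtering step amounts to thresholding entailment probabilities. The sketch below is illustrative only, not the actual cRocoDiLe code; the scoring model and the 0.75 threshold are hypothetical:

```python
def filter_by_entailment(triplets, entailment_scores, threshold=0.75):
    """Keep only triplets whose entailment probability meets the threshold.

    `entailment_scores[i]` is an NLI model's P(text entails triplets[i]);
    both the scoring model and the 0.75 threshold are hypothetical here.
    """
    return [t for t, p in zip(triplets, entailment_scores) if p >= threshold]

kept = filter_by_entailment(
    [("Punta Cana", "country", "Dominican Republic"),
     ("Punta Cana", "capital of", "Spain")],
    [0.97, 0.12],  # the second triplet is not entailed by the text
)
print(kept)  # → [('Punta Cana', 'country', 'Dominican Republic')]
```
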

#### Who are the source language producers?

Any Wikipedia and Wikidata contributor.

### Annotations

#### Annotation process

The dataset was annotated automatically by the dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/Babelscape/crocodile).

#### Who are the annotators?

Annotations are machine-generated.

### Personal and Sensitive Information

All text comes from Wikipedia; any personal or sensitive information present there may also be present in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

The dataset serves as a pre-training step for Relation Extraction models. It is distantly annotated, hence it should only be used as such. A model trained solely on this dataset may produce hallucinations due to the silver nature of the dataset.

### Discussion of Biases

Since the dataset was automatically created from Wikipedia and Wikidata, it may reflect the biases within those sources.

For Wikipedia text, see for example [Dinan et al 2020 on biases in Wikipedia (esp. Table 1)](https://arxiv.org/abs/2005.00614), or [Blodgett et al 2020](https://www.aclweb.org/anthology/2020.acl-main.485/) for a more general discussion of the topic.

For Wikidata, there are class imbalances, also resulting from Wikipedia.

### Other Known Limitations

None identified for now.

## Additional Information

### Dataset Curators

- Pere-Lluis Huguet Cabot - Babelscape and Sapienza University of Rome, Italy
- Roberto Navigli - Sapienza University of Rome, Italy

### Licensing Information

Contents of this repository are restricted to only non-commercial research purposes under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.

### Citation Information

```
@inproceedings{huguet-cabot-navigli-2021-rebel,
    title = "REBEL: Relation Extraction By End-to-end Language generation",
    author = "Huguet Cabot, Pere-Llu{\'\i}s and
      Navigli, Roberto",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
    month = nov,
    year = "2021",
    address = "Online and in the Barceló Bávaro Convention Centre, Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf",
}
```

### Contributions

Thanks to [@littlepea13](https://github.com/LittlePea13) for adding this dataset.