---
task_categories:
- text-classification
task_ids:
- multi-class-classification
language:
- en
size_categories:
- 100K<n<1M
tags:
- relation extraction
---
The TAC Relation Extraction Dataset (TACRED) is a large-scale relation extraction dataset. Examples in TACRED cover 41 relation types as used in the TAC KBP challenges (e.g. per:schools_attended and org:members) or are labeled as no_relation if no defined relation is held. These examples are created by combining available human annotations from the TAC KBP challenges and crowdsourcing. Please see [Stanford's EMNLP paper](https://nlp.stanford.edu/pubs/zhang2017tacred.pdf), or their [EMNLP slides](https://nlp.stanford.edu/projects/tacred/files/position-emnlp2017.pdf) for full details.

Note:
- There is currently a [label-corrected version](https://github.com/DFKI-NLP/tacrev) of the TACRED dataset, which you should consider using instead of the original version released in 2017. For more details on this new version, see the [TACRED Revisited paper](https://aclanthology.org/2020.acl-main.142/) published at ACL 2020.
- There is also a [relabeled and pruned version](https://github.com/gstoica27/Re-TACRED) of the TACRED dataset. For more details on this new version, see the [Re-TACRED paper](https://arxiv.org/abs/2104.08398) published in 2021.

This repository provides all three versions of the dataset as BuilderConfigs: `'original'`, `'revisited'` and `'re-tacred'`. Simply set the `name` parameter in the `load_dataset` method to choose a specific version. The original TACRED is loaded by default.

### Supported Tasks and Leaderboards
- **Tasks:** Relation Classification

[...]

To minimize dataset bias, TACRED is stratified across the years in which the TAC KBP challenge was run:

|           | Train                      | Dev                   | Test                  |
| --------- | -------------------------- | --------------------- | --------------------- |
| TACRED    | 68,124 (TAC KBP 2009-2012) | 22,631 (TAC KBP 2013) | 15,509 (TAC KBP 2014) |
| Re-TACRED | 58,465 (TAC KBP 2009-2012) | 19,584 (TAC KBP 2013) | 13,418 (TAC KBP 2014) |

## Dataset Creation
### Curation Rationale
[More Information Needed]

[...]

The original dataset:
```
[...]
}
```

For the revised version (`"revisited"`), please also cite:
```
@inproceedings{alt-etal-2020-tacred,
    title = "{TACRED} Revisited: A Thorough Evaluation of the {TACRED} Relation Extraction Task",
    [...]
    pages = "1558--1569",
}
```

For the relabeled version (`"re-tacred"`), please also cite:
```
@article{stoica2021re,
  author     = {George Stoica and
                Emmanouil Antonios Platanios and
                Barnab{\'{a}}s P{\'{o}}czos},
  title      = {Re-TACRED: Addressing Shortcomings of the {TACRED} Dataset},
  journal    = {CoRR},
  volume     = {abs/2104.08398},
  year       = {2021},
  url        = {https://arxiv.org/abs/2104.08398},
  eprinttype = {arXiv},
  eprint     = {2104.08398},
  timestamp  = {Mon, 26 Apr 2021 17:25:10 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2104-08398.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions
Thanks to [@dfki-nlp](https://github.com/dfki-nlp) and [@phucdev](https://github.com/phucdev) for adding this dataset.
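As a usage sketch for the `name` parameter described above: the small helper below builds the keyword arguments for `load_dataset` and validates the config name against the three documented variants. The helper itself, the `"tacred"` repo path, and the `data_dir` handling are illustrative assumptions (TACRED's source files are LDC-licensed and must be supplied locally), so the actual `load_dataset` call is shown commented out.

```python
# Hypothetical helper for choosing a TACRED variant; only the three
# config names ('original', 'revisited', 're-tacred') come from the card.

TACRED_CONFIGS = ("original", "revisited", "re-tacred")


def tacred_load_kwargs(variant="original", data_dir=None):
    """Build keyword arguments for `load_dataset` for one TACRED variant."""
    if variant not in TACRED_CONFIGS:
        raise ValueError(f"unknown TACRED config {variant!r}, expected one of {TACRED_CONFIGS}")
    kwargs = {"name": variant}
    if data_dir is not None:
        # The LDC-licensed source files must be available on disk.
        kwargs["data_dir"] = data_dir
    return kwargs


# Intended usage (requires the `datasets` library and the local TACRED files):
# from datasets import load_dataset
# ds = load_dataset("tacred", **tacred_load_kwargs("revisited", data_dir="path/to/tacred"))
```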