---
license: cc-by-sa-4.0
task_categories:
- token-classification
language:
- ar
size_categories:
- 100K<n<1M
---
# AREEj: Arabic Relation Extraction with Evidence
This dataset was created by adding evidence annotations to the Arabic subset of SREDFM. It accompanies the paper *AREEj: Arabic Relation Extraction with Evidence*, published in the Proceedings of The Second Arabic Natural Language Processing Conference. If you use the dataset or the model, please cite this work in your paper (a minimal loading sketch follows the citation):
@inproceedings{mraikhat-etal-2024-areej,
title = "{AREE}j: {A}rabic Relation Extraction with Evidence",
author = "Mraikhat, Osama and
Hamoud, Hadi and
Zaraket, Fadi",
editor = "Habash, Nizar and
Bouamor, Houda and
Eskander, Ramy and
Tomeh, Nadi and
Abu Farha, Ibrahim and
Abdelali, Ahmed and
Touileb, Samia and
Hamed, Injy and
Onaizan, Yaser and
Alhafni, Bashar and
Antoun, Wissam and
Khalifa, Salam and
Haddad, Hatem and
Zitouni, Imed and
AlKhamissi, Badr and
Almatham, Rawan and
Mrini, Khalil",
booktitle = "Proceedings of The Second Arabic Natural Language Processing Conference",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.arabicnlp-1.6",
pages = "67--72",
abstract = "Relational entity extraction is key in building knowledge graphs. A relational entity has a source, a tail and a type. In this paper, we consider Arabic text and introduce evidence enrichment which intuitively informs models for better predictions. Relational evidence is an expression in the text that explains how sources and targets relate. It also provides hints from which models learn. This paper augments the existing relational extraction dataset with evidence annotation to its 2.9-million Arabic relations. We leverage the augmented dataset to build AREEj, a relation extraction with evidence model from Arabic documents. The evidence augmentation model we constructed to complete the dataset achieved .82 F1-score (.93 precision, .73 recall). The target outperformed SOTA mREBEL with .72 F1-score (.78 precision, .66 recall).",
}
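
For reference, here is a minimal sketch of loading the dataset with the Hugging Face `datasets` library. The repository id below is a placeholder, since this card does not state the Hub id; replace it with the dataset's actual path.

```python
from datasets import load_dataset

# Placeholder repository id; replace it with the dataset's actual Hub id.
dataset = load_dataset("your-namespace/AREEj")

# Inspect one example. Per the description above, each relation should carry
# a source, a tail, a type, and an evidence annotation, though the exact
# field names may differ from this sketch.
print(dataset["train"][0])
```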
## License
ArSRED is licensed under the CC BY-SA 4.0 license. The full license text is available at https://creativecommons.org/licenses/by-sa/4.0/.