---
datasets:
- cyanic-selkie/aida-conll-yago-wikidata
language:
- en
metrics:
- accuracy
- f1
tags:
- entity linking
- entity disambiguation
- EL
- ReFinED
- RoBERTa
---
# Model Card for LLMAEL-ReFinED-FT
<p align="justify">
We introduce <b>LLMAEL</b> (<b>LLM</b>-<b>A</b>ugmented <b>E</b>ntity <b>L</b>inking), a pipeline method to enhance entity linking through LLM data augmentation.
We release our custom fine-tuned <b>LLMAEL-ReFinED-FT</b> model, which is fine-tuned from the <b>ReFinED</b> EL model using a <b>Llama-3-70b</b>-augmented version of the <b>AIDA_train</b> dataset.
LLMAEL-ReFinED-FT achieves new state-of-the-art (SOTA) results across six standard EL benchmarks: AIDA_test, MSNBC, AQUAINT, ACE2004, WNED-CLUEWEB, and WNED-WIKIPEDIA, with an average accuracy gain of 1.21%.
For more details, refer to our paper 📖 [LLMAEL: Large Language Models are Good Context Augmenters for Entity Linking](https://arxiv.org/abs/2407.04020).
</p>
### Model Description
- **Developed by:** Amy Xin, Yunjia Qi, Zijun Yao, Fangwei Zhu, Kaisheng Zeng, Bin Xu, Lei Hou, Juanzi Li
- **Model type:** Entity Linking Model
- **Language(s):** English
- **Fine-tuned from model:** [ReFinED](https://arxiv.org/abs/2207.04108)