---
datasets:
- cyanic-selkie/aida-conll-yago-wikidata
language:
- en
metrics:
- accuracy
- f1
tags:
- entity linking
- entity disambiguation
- EL
- ReFinED
- RoBERTa
---
|
|
|
# Model Card for LLMAEL-ReFinED-FT |
|
|
|
<p align="justify"> |
|
|
|
We introduce <b>LLMAEL</b> (<b>LLM</b>-<b>A</b>ugmented <b>E</b>ntity <b>L</b>inking), a pipeline method to enhance entity linking through LLM data augmentation. |
|
We release <b>LLMAEL-ReFinED-FT</b>, a model fine-tuned from the <b>ReFinED</b> EL model on a <b>Llama-3-70b</b>-augmented version of the <b>AIDA_train</b> dataset.

LLMAEL-ReFinED-FT sets new SOTA results across six standard EL benchmarks: AIDA_test, MSNBC, AQUAINT, ACE2004, WNED-CLUEWEB, and WNED-WIKIPEDIA, achieving an average accuracy gain of 1.21%.
|
|
|
For more details, refer to our paper: [LLMAEL: Large Language Models are Good Context Augmenters for Entity Linking](https://arxiv.org/abs/2407.04020).
|
</p> |
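As a rough illustration of the pipeline idea (not the authors' actual implementation), the augmentation step can be sketched as follows: an LLM-generated description of the mentioned entity is appended to the mention's original context, and the augmented text is what the fine-tuned EL model receives. The `llm_describe` stub and its canned output below are hypothetical stand-ins for a real Llama-3-70b call.

```python
def llm_describe(mention: str, context: str) -> str:
    """Stand-in for an LLM call (e.g. Llama-3-70b) that returns a short
    description of the entity the mention likely refers to."""
    # A real pipeline would prompt the LLM with the mention in context;
    # this canned lookup exists for demonstration only.
    canned = {
        "Jordan": "Michael Jordan is a former American basketball player.",
    }
    return canned.get(mention, "")


def augment_context(mention: str, context: str) -> str:
    """Append the LLM-generated description to the original context,
    producing the augmented input fed to the EL model."""
    description = llm_describe(mention, context)
    return f"{context} {description}".strip() if description else context


augmented = augment_context("Jordan", "Jordan scored 40 points last night.")
```

The key design point is that the EL model itself is unchanged at inference time; only its input context is enriched, which is why the method works as a pipeline around an existing linker.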
|
|
|
|
|
### Model Description |
|
|
|
- **Developed by:** Amy Xin, Yunjia Qi, Zijun Yao, Fangwei Zhu, Kaisheng Zeng, Bin Xu, Lei Hou, Juanzi Li |
|
- **Model type:** Entity Linking Model |
|
- **Language(s):** English |
|
- **Finetuned from model:** [ReFinED](https://arxiv.org/abs/2207.04108)