---
datasets:
- cyanic-selkie/aida-conll-yago-wikidata
language:
- en
metrics:
- accuracy
- f1
tags:
- entity linking
- entity disambiguation
- EL
- ReFinED
- RoBERTa
---

# Model Card for LLMAEL-ReFinED-FT

<p align="justify">

We introduce <b>LLMAEL</b> (<b>LLM</b>-<b>A</b>ugmented <b>E</b>ntity <b>L</b>inking), a pipeline method that enhances entity linking through LLM data augmentation.
We release our custom fine-tuned <b>LLMAEL-ReFinED-FT</b> model, which is fine-tuned from the <b>ReFinED</b> EL model using a <b>Llama-3-70b</b>-augmented version of the <b>AIDA_train</b> dataset.
LLMAEL-ReFinED-FT yields new SOTA results across six standard EL benchmarks: AIDA_test, MSNBC, AQUAINT, ACE2004, WNED-CLUEWEB, and WNED-WIKIPEDIA, achieving an average 1.21% accuracy gain.

For more details, refer to our paper 📖 [LLMAEL: Large Language Models are Good Context Augmenters for Entity Linking](https://arxiv.org/abs/2407.04020).
</p>

### Model Description

- **Developed by:** Amy Xin, Yunjia Qi, Zijun Yao, Fangwei Zhu, Kaisheng Zeng, Bin Xu, Lei Hou, Juanzi Li
- **Model type:** Entity Linking Model
- **Language(s):** English
- **Finetuned from model:** [ReFinED](https://arxiv.org/abs/2207.04108)
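
### Pipeline Sketch

The core idea described above, prompting an LLM for extra context about a mention and feeding the augmented passage to the EL model, can be sketched in a few lines. The helper names, prompt wording, and `llm` callable below are illustrative placeholders, not the exact implementation from the paper:

```python
# Illustrative sketch of LLMAEL-style context augmentation.
# The prompt text and function names are hypothetical; in the actual
# pipeline the context generator is Llama-3-70b and the consumer is ReFinED.

def build_context_prompt(text: str, mention: str) -> str:
    """Ask an LLM to briefly describe the mention given its passage."""
    return (
        f'Given the passage: "{text}"\n'
        f'Briefly describe the entity "{mention}".'
    )

def augment(text: str, mention: str, llm) -> str:
    """Append LLM-generated context to the original passage, producing
    the augmented input that the entity-linking model is trained on."""
    context = llm(build_context_prompt(text, mention))
    return f"{text} {context}"

# Usage with a stand-in for the LLM call:
fake_llm = lambda prompt: "Ada Lovelace was a 19th-century mathematician."
augmented = augment("Lovelace wrote the first program.", "Lovelace", fake_llm)
```

The fine-tuned model then links mentions in `augmented` rather than in the raw passage, which is what drives the reported accuracy gains.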