Update README.md
The model performance on the test sets is:
| NER-PMR-large (multi-task model) | 92.9 | 54.7 | 87.8 | 88.4 |

Note that the RoBERTa-large and PMR-large numbers come from single-task fine-tuning, while NER-PMR-large is a multi-task fine-tuned model.
As it is fine-tuned on multiple datasets, we believe that NER-PMR-large has better generalization capability to other NER tasks than PMR-large and RoBERTa-large.
### How to use
You can try the code from [this repo](https://github.com/DAMO-NLP-SG/PMR/NER) for both training and inference.
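
For a quick feel of the inference flow, here is a minimal sketch of MRC-style entity extraction, where the entity type is phrased as a query and the model extracts a matching span from the context. It assumes a Hub checkpoint ID of `DAMO-NLP-SG/ner-pmr-large` (hypothetical), an illustrative query wording, and that the model can be driven through the generic extractive-QA interface; the actual PMR extraction head, query templates, and training scripts are those in the repo linked above.

```python
# Minimal inference sketch, NOT the official pipeline.
# Assumptions: hypothetical Hub ID, illustrative query format, and a
# standard extractive-QA span head; see the PMR repo for the real setup.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "DAMO-NLP-SG/ner-pmr-large"  # hypothetical checkpoint ID
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

# The entity type is described as a "question"; the sentence is the context.
question = '"person" refers to the name of a person.'  # assumed query wording
context = "Barack Obama was born in Hawaii."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the highest-scoring start/end positions as the extracted span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
span = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(span)  # ideally a "person" span such as "Barack Obama"
```

The query-plus-context encoding is the key design point of this formulation: each entity type becomes its own query, so new entity types can be targeted at inference time by writing a new query rather than retraining a classification head.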