asahi417 committed
Commit c11f276
1 Parent(s): ad1d960

model update

Files changed (1)
1. README.md +1 -1
README.md CHANGED
@@ -79,7 +79,7 @@ widget:
  # tner/roberta-large-tweetner7-2020-selflabel2020-continuous
 
  This model is a fine-tuned version of [tner/roberta-large-tweetner-2020](https://huggingface.co/tner/roberta-large-tweetner-2020) on the
- [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train` split). This model is fine-tuned on a self-labeled dataset, namely the `extra_None` split of [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) annotated by [None](https://huggingface.co/None-tweetner7-2020). Please check [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling) for more detail on reproducing the model. The model is first fine-tuned on `train_2020`, and then continuously fine-tuned on the self-labeled dataset.
+ [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train` split). This model is fine-tuned on a self-labeled dataset, namely the `extra_2020` split of [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) annotated by [tner/roberta-large](https://huggingface.co/tner/roberta-large-tweetner7-2020). Please check [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling) for more detail on reproducing the model. The model is first fine-tuned on `train_2020`, and then continuously fine-tuned on the self-labeled dataset.
  Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository for more detail). It achieves the following results on the test set of 2021:
  - F1 (micro): 0.6514522821576764
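
For reference, the README described in this diff points to a T-NER fine-tuned NER checkpoint. Below is a minimal inference sketch, assuming the `tner` package's `TransformersNER` class and its `predict` method as documented in the T-NER repository; the example sentence is purely illustrative and not from the commit.

```python
# Minimal sketch: load the checkpoint named in this README and run NER inference.
# Assumes `pip install tner` and that TransformersNER / predict behave as in the T-NER docs.
from tner import TransformersNER

# Load the fine-tuned model referenced in the README.
model = TransformersNER("tner/roberta-large-tweetner7-2020-selflabel2020-continuous")

# Predict entity spans for a batch of texts (illustrative input).
output = model.predict(["Jacob Collier is a Grammy-awarded English artist from London."])
print(output)
```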