model update
README.md
CHANGED
@@ -2,7 +2,7 @@
 datasets:
 - relbert/conceptnet_relational_similarity
 model-index:
-- name: relbert/relbert-roberta-base-nce-
+- name: relbert/relbert-roberta-base-nce-conceptnet
   results:
   - task:
       name: Relation Mapping
@@ -186,11 +186,11 @@ model-index:
       value: 0.8877705847530848
 
 ---
-# relbert/relbert-roberta-base-nce-
+# relbert/relbert-roberta-base-nce-conceptnet
 
 RelBERT based on [roberta-base](https://huggingface.co/roberta-base) fine-tuned on [relbert/conceptnet_relational_similarity](https://huggingface.co/datasets/relbert/conceptnet_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) for more detail of fine-tuning).
 This model achieves the following results on the relation understanding tasks:
-- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-
+- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-conceptnet/raw/main/analogy.forward.json)):
     - Accuracy on SAT (full): 0.44919786096256686
     - Accuracy on SAT: 0.4421364985163205
     - Accuracy on BATS: 0.6197887715397443
@@ -200,13 +200,13 @@ This model achieves the following results on the relation understanding tasks:
     - Accuracy on ConceptNet Analogy: 0.2273489932885906
     - Accuracy on T-Rex Analogy: 0.44808743169398907
     - Accuracy on NELL-ONE Analogy: 0.6616666666666666
-- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-
+- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-conceptnet/raw/main/classification.json)):
     - Micro F1 score on BLESS: 0.8880518306463764
     - Micro F1 score on CogALexV: 0.828169014084507
     - Micro F1 score on EVALution: 0.628385698808234
     - Micro F1 score on K&H+N: 0.9501982332892815
     - Micro F1 score on ROOT09: 0.8884362268881228
-- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-
+- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-conceptnet/raw/main/relation_mapping.json)):
     - Accuracy on Relation Mapping: 0.8411507936507936
 
 
@@ -218,7 +218,7 @@ pip install relbert
 and activate model as below.
 ```python
 from relbert import RelBERT
-model = RelBERT("relbert/relbert-roberta-base-nce-
+model = RelBERT("relbert/relbert-roberta-base-nce-conceptnet")
 vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
 ```
 
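The usage hunk above only embeds a single word pair. As an illustration of how those vectors are typically consumed, here is a minimal sketch that reuses the card's `RelBERT` and `get_embedding` calls and adds a numpy cosine similarity; the second word pair and the similarity code are illustrative additions, not part of the card or this commit.

```python
import numpy as np
from relbert import RelBERT

# Load the model under the name introduced by this update.
model = RelBERT("relbert/relbert-roberta-base-nce-conceptnet")

# Relation embeddings of two word pairs, each of shape (n_dim,).
tokyo_japan = np.asarray(model.get_embedding(['Tokyo', 'Japan']))
paris_france = np.asarray(model.get_embedding(['Paris', 'France']))

# Cosine similarity between the two relation vectors; pairs sharing a
# relation ("capital of") should score higher than unrelated pairs.
similarity = tokyo_japan @ paris_france / (
    np.linalg.norm(tokyo_japan) * np.linalg.norm(paris_france)
)
print(float(similarity))
```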
@@ -242,7 +242,7 @@ vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
 - loss_function_config: {'temperature': 0.05, 'num_negative': 300, 'num_positive': 30}
 - augment_negative_by_positive: True
 
-See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-base-nce-
+See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-base-nce-conceptnet/raw/main/finetuning_config.json).
 
 ### Reference
 If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.emnlp-main.712/).
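For readers skimming the `loss_function_config` in the hunk above, a generic temperature-scaled contrastive (NCE-style) objective looks roughly as follows. This is only a sketch of how `temperature`, `num_negative`, and `num_positive` enter such a loss, written with PyTorch; it is not the relbert implementation.

```python
import torch
import torch.nn.functional as F

def nce_style_loss(anchor, positives, negatives, temperature=0.05):
    """Toy temperature-scaled contrastive objective.

    anchor:    (n_dim,)               relation embedding of an anchor pair
    positives: (num_positive, n_dim)  embeddings of pairs with the same relation
    negatives: (num_negative, n_dim)  embeddings of pairs from other relations
    """
    anchor = F.normalize(anchor, dim=-1)
    positives = F.normalize(positives, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_scores = positives @ anchor / temperature  # (num_positive,)
    neg_scores = negatives @ anchor / temperature  # (num_negative,)

    # Score each positive against every negative: softmax over 1 + num_negative
    # candidates, where index 0 is the positive.
    logits = torch.cat(
        [pos_scores.unsqueeze(1), neg_scores.expand(pos_scores.size(0), -1)],
        dim=1,
    )
    labels = torch.zeros(pos_scores.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```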