Update README.md
README.md
CHANGED
@@ -77,7 +77,7 @@ Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
 ||||||||||
 |[Ruri-Small](https://huggingface.co/cl-nagoya/ruri-small)|68M|69.41|82.79|76.22|93.00|51.19|62.11|71.53|
 |[Ruri-Base](https://huggingface.co/cl-nagoya/ruri-base)|111M|69.82|82.87|75.58|92.91|54.16|62.38|71.91|
-|[Ruri-Large](https://huggingface.co/cl-nagoya/ruri-large)|337M|73.02|83.13|77.43|92.99|51.82|62.29|73.31|
+|[**Ruri-Large**](https://huggingface.co/cl-nagoya/ruri-large) (this model)|337M|73.02|83.13|77.43|92.99|51.82|62.29|73.31|
@@ -93,16 +93,10 @@ Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
 - **License:** Apache 2.0
 <!-- - **Training Dataset:** Unknown -->
 
-### Model Sources
-
-- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
-- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
-- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
-
 ### Full Model Architecture
 
 ```
-
+SentenceTransformer(
 (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
 (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
 )
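For context, the architecture block in the second hunk is the standard sentence-transformers module stack: a BertModel encoder followed by mean pooling, producing 1024-dimensional sentence embeddings. Below is a minimal usage sketch, assuming the `sentence-transformers` library and the `cl-nagoya/ruri-large` checkpoint linked in the table; the example sentences are placeholders and are not taken from the model card.

```python
# Minimal sketch (not part of this commit): load the checkpoint referenced above
# and embed a couple of sentences, assuming sentence-transformers is installed.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("cl-nagoya/ruri-large")  # 337M-param Japanese encoder

# Placeholder Japanese sentences (a query-like and an answer-like text).
sentences = [
    "瑠璃色はどんな色ですか?",
    "瑠璃色は紫みを帯びた濃い青色です。",
]

# Mean pooling over the BertModel token states (the Pooling module above)
# yields one 1024-dimensional vector per sentence.
embeddings = model.encode(sentences)
print(embeddings.shape)  # -> (2, 1024)

# Cosine similarity between the two embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```

If the model card specifies query/passage prefixes for retrieval use (as some Japanese embedding models do), prepend them to the input texts before calling `encode`; check the linked card for the exact convention.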