---
title: README
emoji: 🔥
colorFrom: red
colorTo: indigo
sdk: static
pinned: false
---
<img src="https://raw.githubusercontent.com/asahi417/relbert/test/assets/relbert_logo.png" alt="RelBERT logo" width="150" style="margin-left:auto; margin-right:auto; display:block"/>
<br>
RelBERT provides high-quality semantic embeddings of word pairs, powered by a pre-trained language model.
Install <a href="https://pypi.org/project/relbert/">relbert</a> via pip,
<pre class="line-numbers">
<code class="language-bash">
pip install relbert
</code>
</pre>
and play with RelBERT models.
<pre class="line-numbers">
<code class="language-python">
from relbert import RelBERT
model = RelBERT('relbert/relbert-roberta-large')
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
</code>
</pre>
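Word-pair embeddings can be compared directly, for example with cosine similarity, to score how similar the relation between one pair is to the relation between another. Below is a minimal sketch: the <code>cosine_similarity</code> helper and the placeholder vectors are ours for illustration; only <code>RelBERT</code> and <code>get_embedding</code> come from the snippet above.
<pre class="line-numbers">
<code class="language-python">
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# With RelBERT (requires downloading the model):
# from relbert import RelBERT
# model = RelBERT('relbert/relbert-roberta-large')
# v1 = model.get_embedding(['Tokyo', 'Japan'])
# v2 = model.get_embedding(['Paris', 'France'])
# cosine_similarity(v1, v2)  # higher for pairs sharing the same relation

# Self-contained check with placeholder vectors:
v = np.array([1.0, 2.0, 3.0])
print(cosine_similarity(v, v))  # identical vectors score 1.0
</code>
</pre>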
See the links below for more information.
<ul>
<li> GitHub: <a href="https://github.com/asahi417/relbert">https://github.com/asahi417/relbert</a></li>
<li> Paper (EMNLP 2021 main conference): <a href="https://arxiv.org/abs/2110.15705">https://arxiv.org/abs/2110.15705</a></li>
<li> HuggingFace: <a href="https://huggingface.co/relbert">https://huggingface.co/relbert</a></li>
<li> PyPI: <a href="https://pypi.org/project/relbert">https://pypi.org/project/relbert</a></li>
</ul>