|
--- |
|
title: README |
|
emoji: 🔥 |
|
colorFrom: red |
|
colorTo: indigo |
|
sdk: static |
|
pinned: false |
|
--- |
|
|
|
<img src="https://raw.githubusercontent.com/asahi417/relbert/test/assets/relbert_logo.png" alt="" width="150" style="margin-left:auto; margin-right:auto; display:block"/>
|
|
|
<br> |
|
|
|
RelBERT provides high-quality semantic embeddings of word pairs, powered by pre-trained language models.
|
Install <a href="https://pypi.org/project/relbert/">relbert</a> via pip, |
|
|
|
<pre class="line-numbers"> |
|
<code class="language-bash">
|
pip install relbert |
|
</code> |
|
</pre> |
|
|
|
and play with RelBERT models. |
|
|
|
<pre class="line-numbers"> |
|
<code class="language-python"> |
|
from relbert import RelBERT |
|
model = RelBERT('relbert/relbert-roberta-large') |
|
vector = model.get_embedding(['Tokyo', 'Japan'])  # vector of shape (1024,)
|
</code> |
|
</pre> |
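Pair embeddings can then be compared with cosine similarity, e.g. to check whether two word pairs share the same relation. A minimal sketch with NumPy; the vectors below are toy stand-ins for `model.get_embedding` outputs (the real vectors are 1024-dimensional):

<pre class="line-numbers">

<code class="language-python">

import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two dense vectors.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors for illustration only, not real RelBERT outputs.
v_tokyo_japan = [0.20, 0.90, 0.10]
v_paris_france = [0.25, 0.85, 0.05]
v_unrelated = [0.90, -0.10, 0.40]

print(cosine_similarity(v_tokyo_japan, v_paris_france))  # high: same relation
print(cosine_similarity(v_tokyo_japan, v_unrelated))     # lower

</code>

</pre>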
|
|
|
See more information below.
|
<ul> |
|
<li>GitHub: <a href="https://github.com/asahi417/relbert">https://github.com/asahi417/relbert</a></li>

<li>Paper (EMNLP 2021 main conference): <a href="https://arxiv.org/abs/2110.15705">https://arxiv.org/abs/2110.15705</a></li>

<li>HuggingFace: <a href="https://huggingface.co/relbert">https://huggingface.co/relbert</a></li>

<li>PyPI: <a href="https://pypi.org/project/relbert">https://pypi.org/project/relbert</a></li>
|
</ul> |
|
|