---
language:
- pt
thumbnail: "Portuguese SBERT for the Legal Domain"
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- transformers
datasets:
- assin
- assin2
widget:
- source_sentence: "O advogado apresentou as provas ao juiz."
  sentences:
  - "O juiz leu as provas."
  - "O juiz leu o recurso."
  - "O juiz atirou uma pedra."
  example_title: "Example 1"
metrics:
- pearsonr
---
# rufimelo/Legal-SBERTimbau-sts-large
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
rufimelo/Legal-SBERTimbau-sts-large is based on Legal-BERTimbau-large, which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large.
It is adapted to the Portuguese legal domain and trained for STS on Portuguese datasets.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]
model = SentenceTransformer('rufimelo/Legal-SBERTimbau-sts-large')
embeddings = model.encode(sentences)
print(embeddings)
```
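Since the model is trained for STS, the cosine similarity between embeddings can be read directly as semantic relatedness, which is all a simple semantic search needs. A minimal sketch, reusing the widget sentences from the metadata above (the corpus and query are purely illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('rufimelo/Legal-SBERTimbau-sts-large')

# Illustrative corpus and query; any Portuguese legal text works the same way
corpus = ["O advogado apresentou as provas ao juiz.",
          "O juiz leu o recurso.",
          "O juiz atirou uma pedra."]
query = "O juiz leu as provas."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
for sentence, score in zip(corpus, scores):
    print(f"{score:.4f}\t{sentence}")
```
Higher scores indicate closer meanings, so ranking the corpus by score yields the search results.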
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-SBERTimbau-sts-large')
model = AutoModel.from_pretrained('rufimelo/Legal-SBERTimbau-sts-large')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
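Without sentence-transformers, the similarity between the pooled embeddings can be computed directly in PyTorch. A short sketch continuing from the snippet above:
```python
import torch.nn.functional as F

# L2-normalize the embeddings; the dot product then equals cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
cosine = normalized[0] @ normalized[1]
print(f"Cosine similarity: {cosine.item():.4f}")
```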
## Evaluation Results STS
| Model | Dataset | Pearson Correlation |
| ---------------------------------------- | ---------- | ---------- |
| Legal-SBERTimbau-sts-large | Assin | 0.76629 |
| Legal-SBERTimbau-sts-large | Assin2 | 0.82357 |
| Legal-SBERTimbau-sts-base | Assin | 0.71457 |
| Legal-SBERTimbau-sts-base | Assin2 | 0.73545 |
| Legal-SBERTimbau-sts-large-v2 | Assin | 0.76299 |
| Legal-SBERTimbau-sts-large-v2 | Assin2 | 0.81121 |
| Legal-SBERTimbau-sts-large-v2 | stsb_multi_mt pt | 0.81726 |
| paraphrase-multilingual-mpnet-base-v2 | Assin | 0.71457 |
| paraphrase-multilingual-mpnet-base-v2 | Assin2 | 0.79831 |
| paraphrase-multilingual-mpnet-base-v2 | stsb_multi_mt pt | 0.83999 |
| paraphrase-multilingual-mpnet-base-v2 fine-tuned with assin(s) | Assin | 0.77641 |
| paraphrase-multilingual-mpnet-base-v2 fine-tuned with assin(s) | Assin2 | 0.79831 |
| paraphrase-multilingual-mpnet-base-v2 fine-tuned with assin(s) | stsb_multi_mt pt | 0.84575 |
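For reference, a Pearson correlation of this kind can be computed along the following lines (a sketch; the exact split and preprocessing behind the table above are assumptions):
```python
import torch
from datasets import load_dataset
from scipy.stats import pearsonr
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('rufimelo/Legal-SBERTimbau-sts-large')
data = load_dataset('assin2', split='test')

emb1 = model.encode(data['premise'], convert_to_tensor=True)
emb2 = model.encode(data['hypothesis'], convert_to_tensor=True)

# Cosine similarity of each pair, compared against the gold relatedness score
cosine_scores = torch.nn.functional.cosine_similarity(emb1, emb2).cpu().numpy()
print(pearsonr(cosine_scores, data['relatedness_score']))
```
Pearson correlation is scale-invariant, so the raw 1-5 relatedness scores can be compared to cosine similarities directly.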
## Training
rufimelo/Legal-SBERTimbau-sts-large is based on Legal-BERTimbau-large, which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large.
It was trained for Semantic Textual Similarity, being fine-tuned on the [assin](https://huggingface.co/datasets/assin) and [assin2](https://huggingface.co/datasets/assin2) datasets.
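The fine-tuning presumably follows the standard sentence-transformers STS recipe. A minimal sketch, assuming Legal-BERTimbau-large as the starting checkpoint and illustrative hyperparameters (not necessarily those actually used):
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed starting checkpoint; mean pooling is added automatically
model = SentenceTransformer('rufimelo/Legal-BERTimbau-large')

# assin2 relatedness scores range from 1 to 5; scale them to [0, 1]
train_data = load_dataset('assin2', split='train')
examples = [InputExample(texts=[p, h], label=(s - 1.0) / 4.0)
            for p, h, s in zip(train_data['premise'],
                               train_data['hypothesis'],
                               train_data['relatedness_score'])]

train_dataloader = DataLoader(examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)],
          epochs=1, warmup_steps=100)  # illustrative hyperparameters
```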
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
If you use this work, please cite BERTimbau:
```bibtex
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
```