---
datasets:
- assin2
language:
- pt
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- nli
---

# Model Card for bertimbau_large_plue_mnli_fine_tuned

<!-- Provide a quick summary of what the model is/does. -->

This is a **[BERTimbau-large](https://huggingface.co/neuralmind/bert-large-portuguese-cased) model fine-tuned** on 5K (premise, hypothesis) sentence pairs from
the **PLUE/MNLI** corpus (the MNLI subset of PLUE, a Portuguese translation of the GLUE benchmark). The original references are
[Unsupervised Cross-Lingual Representation Learning at Scale](https://arxiv.org/pdf/1911.02116) and [PLUE](https://huggingface.co/datasets/dlb/plue), respectively. This model is suitable for Portuguese.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** Giovani Tavares and Felipe Ribas Serras
- **Advised by:** Felipe Ribas Serras, Renata Wassermann and Marcelo Finger
- **Model type:** Transformer-based text classifier
- **Language(s) (NLP):** Portuguese
- **License:** MIT
- **Fine-tuned from model:** [BERTimbau-large](https://huggingface.co/neuralmind/bert-large-portuguese-cased)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [Natural-Portuguese-Language-Inference](https://github.com/giogvn/Natural-Portuguese-Language-Inference)
- **Paper:** This is ongoing research; we are currently writing a paper that fully describes our experiments.

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

This fine-tuned version of [BERTimbau-large](https://huggingface.co/neuralmind/bert-large-portuguese-cased) performs Natural
Language Inference (NLI), which is a text classification task.

<!-- <div id="assin_function">

**Definition 1.** Given a pair of sentences $$(premise, hypothesis)$, let $\hat{f}^{(xlmr\_base)}$ be the fine-tuned models' inference function:

$$
\hat{f}^{(xlmr\_base)} = 
\begin{cases} 
ENTAILMENT, & \text{if $premise$ entails $hypothesis$}\\
PARAPHRASE, & \text{if $premise$ entails $hypothesis$ and $hypothesis$ entails $premise$}\\
NONE & \text{otherwise}
\end{cases}
$$
</div> -->


The *(premise, hypothesis)* entailment definition used is the same as the one found in Salvatore's paper [1].

Therefore, this fine-tuned version of [BERTimbau-large](https://huggingface.co/neuralmind/bert-large-portuguese-cased) classifies pairs of sentences of the form *(premise, hypothesis)* into one of the classes *entailment*, *neutral* and *contradiction*.

<!-- ## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->


## Demo

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model_path = "giotvr/bertimbau_large_plue_mnli_fine_tuned"
premise = "As mudanças climáticas são uma ameaça séria para a biodiversidade do planeta."
hypothesis = "A biodiversidade do planeta é seriamente ameaçada pelas mudanças climáticas."

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

# Encode the (premise, hypothesis) pair as a single input sequence
input_pair = tokenizer(premise, hypothesis, return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    logits = model(**input_pair).logits

# Convert logits to probabilities and print the classes in descending order of confidence
probs = torch.nn.functional.softmax(logits, dim=-1)
probs, sorted_indices = torch.sort(probs, descending=True)
for i, score in enumerate(probs[0]):
    label = model.config.id2label[sorted_indices[0][i].item()]
    print(f"{label}: {score.item():.4f}")
```
### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

This model should be used for scientific purposes only; it has not been tested for production environments.

<!-- ## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed] -->

## Fine-Tuning Details

### Fine-Tuning Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- **Train Dataset:** [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue)

- **Evaluation Dataset used for Hyperparameter Tuning:** [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue)'s validation split

- **Test Datasets:**
    - [ASSIN](https://huggingface.co/datasets/assin)'s test split
    - [ASSIN2](https://huggingface.co/datasets/assin2)'s test split
    - [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue/viewer/mnli_matched)'s validation matched split


[ASSIN2 (Avaliação de Similaridade Semântica e Inferência Textual)](https://huggingface.co/datasets/assin2), one of the test corpora, is annotated with Portuguese hypothesis/premise sentence pairs labeled for entailment or neutral relationships between the members of each pair. The corpus is balanced and contains 7k *pt-BR* (Brazilian Portuguese) sentence pairs.

### Fine-Tuning Procedure 

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The model's fine-tuning procedure can be summarized in three major subsequent tasks:

1. **Data Processing:** [PLUE/MNLI](https://huggingface.co/datasets/dlb/plue)'s *train* and *validation* splits were loaded from the **Hugging Face Hub** and processed;
2. **Hyperparameter Tuning:** [BERTimbau-large](https://huggingface.co/neuralmind/bert-large-portuguese-cased)'s hyperparameters were chosen with the help of the [Weights & Biases API](https://docs.wandb.ai/ref/python/public-api/api), which was used to track the results and upload the fine-tuned models;
3. **Final Model Loading and Testing:** using the *cross-tests* approach described in the [Evaluation](#evaluation) section, the model's performance was measured using different datasets and metrics.


<!--  ##### Column Renaming
The **Hugging Face**'s ```transformers``` module's ```DataCollator``` used by its ```Trainer``` requires that the ```class label``` column of the collated dataset to be called ```label```.  [ASSIN](https://huggingface.co/datasets/assin)'s class label column for each hypothesis/premise pair is called ```entailment_judgement```. Therefore, as the first step of the data preprocessing pipeline the column  ```entailment_judgement``` was renamed to ```label``` so that the **Hugging Face**'s ```transformers``` module's ```Trainer``` could be used. -->

#### Hyperparameter Tuning

<!-- The model's training hyperparameters were chosen according to the following definition:

<div id="hyperparameter_tuning">

**Definition 2.** Let $Hyperparms= \{i: i \text{ is an hyperparameter of } \hat{f}^{(xlmr\_base)}\}$ and $\hat{f}^{(xlmr\_base)}$ be the model's inference function defined in [Definition 1](#assin_function) :

$$
Hyperparms = \argmax_{hyp}(eval\_acc(\hat{f}^{(xlmr\_base)}_{hyp}, assin\_validation))
$$
</div> -->

The following hyperparameters were tested in order to maximize the evaluation accuracy:

- **Number of Training Epochs:** $(4,5,6)$
- **Per Device Train Batch Size:** $(8,16,32)$
- **Learning Rate:** $(5e-5, 3e-5, 2e-5)$
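
The grid above defines 27 candidate configurations in total. Below is a minimal, stdlib-only sketch of how such a sweep can be enumerated; it is illustrative only (the actual sweep was orchestrated through the Weights & Biases API, and the exact learning-rate values are an assumption here):

```python
import itertools

# Hyperparameter grid from the sweep described above
epochs = (4, 5, 6)
batch_sizes = (8, 16, 32)
learning_rates = (5e-5, 3e-5, 2e-5)  # assumed grid values

# Every candidate configuration: 3 * 3 * 3 = 27 runs
grid = list(itertools.product(epochs, batch_sizes, learning_rates))
print(len(grid))  # 27
```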


The hyperparameter tuning experiments were run and tracked using the [Weights & Biases API](https://docs.wandb.ai/ref/python/public-api/api) and can be found at this [link](https://wandb.ai/gio_projs/assin_xlm_roberta_v5?workspace=user-giogvn).


#### Training Hyperparameters

The [hyperparameter tuning](#hyperparameter-tuning) performed yielded the following values:

- **Number of Training Epochs:** $6$
- **Per Device Train Batch Size:** $16$
- **Learning Rate:** $5e-5$

## Evaluation

### ASSIN

Testing this model on ASSIN's test split required mapping the *NONE* and *PARAPHRASE* classes found in it, because such classes are not present in PLUE/MNLI. The *NONE* class was considered either *contradiction* or *neutral*, and *PARAPHRASE* was considered *entailment* in both directions: from premise to hypothesis and from hypothesis to premise. More details on this mapping can be found in [Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa](https://linux.ime.usp.br/~giovani/).
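
As a sketch, the mapping described above can be implemented as follows. This is a hypothetical illustration of one possible implementation (the exact procedure is described in the referenced work); `mnli_to_assin` and its arguments are names introduced here for illustration only:

```python
def mnli_to_assin(pred_ph: str, pred_hp: str) -> str:
    """Map a pair of MNLI-style predictions to an ASSIN class.

    pred_ph: model prediction for (premise, hypothesis)
    pred_hp: model prediction for (hypothesis, premise)
    """
    if pred_ph == "entailment" and pred_hp == "entailment":
        # Entailment in both directions counts as a paraphrase
        return "PARAPHRASE"
    if pred_ph == "entailment":
        return "ENTAILMENT"
    # Both *neutral* and *contradiction* collapse into ASSIN's NONE class
    return "NONE"

print(mnli_to_assin("entailment", "entailment"))  # PARAPHRASE
print(mnli_to_assin("entailment", "neutral"))     # ENTAILMENT
print(mnli_to_assin("contradiction", "neutral"))  # NONE
```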

### ASSIN2

Testing this model on ASSIN2's test split required mapping the *NONE* class found in it, because that class is not present in PLUE/MNLI. The *NONE* class was considered either *contradiction* or *neutral*. More details on this mapping can be found in [Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa](https://linux.ime.usp.br/~giovani/).
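
Since ASSIN2 only distinguishes *ENTAILMENT* from *NONE*, the mapping described above reduces to a single condition. A hypothetical sketch (the function name is illustrative; the exact procedure is described in the referenced work):

```python
def mnli_to_assin2(pred: str) -> str:
    # ASSIN2 only separates entailment from everything else,
    # so *neutral* and *contradiction* both map to NONE
    return "ENTAILMENT" if pred == "entailment" else "NONE"

print(mnli_to_assin2("entailment"))     # ENTAILMENT
print(mnli_to_assin2("contradiction"))  # NONE
```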


### PLUE/MNLI

Testing this model on PLUE/MNLI's validation matched split was straightforward, as the model was fine-tuned on that corpus's training split and the class labels already match.

More information on how the class mappings are performed can be found in [Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa](https://linux.ime.usp.br/~giovani/).


### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->
The model's performance metrics for each test dataset are presented separately. Accuracy, F1 score, precision and recall were used in every evaluation performed. These metrics are reported below; more information on them will be available in our ongoing research paper.
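
For reference, these metrics can be computed as in the stdlib-only sketch below. Macro-averaging over the classes is an assumption here; the actual evaluation likely used a standard library implementation such as scikit-learn's:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions matching the gold labels
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_precision_recall_f1(y_true, y_pred):
    """Per-class precision/recall/F1, macro-averaged over the classes."""
    labels = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

y_true = ["entailment", "neutral", "contradiction", "entailment"]
y_pred = ["entailment", "neutral", "entailment", "entailment"]
print(accuracy(y_true, y_pred))  # 0.75
```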

### Results

| test set | accuracy | F1 score | precision | recall |
|----------|----------|----------|-----------|--------|
| assin    |0.72      |0.67      |0.63       |0.73    |
| assin2   |0.87      |0.87      |0.88       |0.87    |
| plue/mnli|0.84      |0.83      |0.84       |0.84    |

## Model Examination

<!-- Relevant interpretability work for the model goes here -->
Some interpretability work is being done in order to understand the model's behavior. Details will be available in the previously mentioned paper.

<!--## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed] -->

<!-- ## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section.

**BibTeX:**

```bibtex
    @article{tcc_paper,
    author    = {Giovani Tavares and Felipe Ribas Serras and Renata Wassermann and Marcelo Finger},
    title     = {Modelos Transformer para Inferência de Linguagem Natural em Português},
    pages     = {x--y},
    year      = {2023}
    }
``` -->

## References

[1][Salvatore, F. S. (2020). Analyzing Natural Language Inference from a Rigorous Point of View (pp. 1-2).](https://www.teses.usp.br/teses/disponiveis/45/45134/tde-05012021-151600/publico/tese_de_doutorado_felipe_salvatore.pdf)

<!--[2][Andrade, G. T. (2023)  Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa  (train_assin_xlmr_base_results PAGES GO HERE)](https://linux.ime.usp.br/~giovani/)

[3][Andrade, G. T. (2023)  Modelos para Inferência em Linguagem Natural que entendem a Língua Portuguesa (train_assin_xlmr_base_conclusions PAGES GO HERE)](https://linux.ime.usp.br/~giovani/) -->