---
language:
- pt
tags:
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-portuguese-execspeech-cap-v3
## Model description
An `xlm-roberta-large` model fine-tuned on Portuguese training data containing executive speeches labeled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).

## How to use the model
This snippet prints the three most probable labels and their corresponding softmax scores:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and the base xlm-roberta-large tokenizer
model = AutoModelForSequenceClassification.from_pretrained("poltextlab/xlm-roberta-large-portuguese-execspeech-cap-v3")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")

sentence = "This is an example."  # replace with your (ideally Portuguese) input text

# Tokenize a single sentence; padding is unnecessary for a batch of one
inputs = tokenizer(sentence,
                   return_tensors="pt",
                   max_length=512,
                   padding="do_not_pad",
                   truncation=True
                   )

# Run inference without tracking gradients
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities, map them to label names, and keep the top 3
probs = torch.softmax(logits, dim=1).tolist()[0]
probs = {model.config.id2label[index]: round(probability, 2) for index, probability in enumerate(probs)}
top3_probs = dict(sorted(probs.items(), key=lambda item: item[1], reverse=True)[:3])

print(top3_probs)
```
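
The snippet prints a dictionary mapping the three highest-scoring label names (taken from `model.config.id2label`) to their rounded softmax probabilities. Since the model was fine-tuned on Portuguese data, pass Portuguese text at inference time; the English placeholder above only illustrates the API.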

## Model performance
The model was evaluated on a test set of 364 examples.<br>
Model accuracy is **0.74**.
| label        |   precision |   recall |   f1-score |   support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0            |        0.71 |     0.9  |       0.79 |        40 |
| 1            |        0.57 |     0.6  |       0.59 |        20 |
| 2            |        1    |     1    |       1    |        13 |
| 3            |        0.5  |     0.78 |       0.61 |         9 |
| 4            |        1    |     0.69 |       0.81 |        16 |
| 5            |        0.91 |     1    |       0.95 |        21 |
| 6            |        0.35 |     0.78 |       0.48 |         9 |
| 7            |        0.89 |     0.81 |       0.85 |        21 |
| 8            |        0    |     0    |       0    |         0 |
| 9            |        0.74 |     1    |       0.85 |        14 |
| 10           |        0.71 |     0.86 |       0.77 |        14 |
| 11           |        0.74 |     0.9  |       0.81 |        29 |
| 12           |        0.69 |     0.69 |       0.69 |        13 |
| 13           |        0    |     0    |       0    |         5 |
| 14           |        1    |     0.38 |       0.55 |         8 |
| 15           |        1    |     0.55 |       0.71 |        11 |
| 16           |        0.79 |     0.58 |       0.67 |        19 |
| 17           |        0.78 |     0.85 |       0.82 |        34 |
| 18           |        0.66 |     0.76 |       0.7  |        33 |
| 19           |        0    |     0    |       0    |        11 |
| 20           |        1    |     0.75 |       0.86 |         8 |
| 21           |        0.75 |     0.19 |       0.3  |        16 |
| macro avg    |        0.67 |     0.64 |       0.63 |       364 |
| weighted avg |        0.73 |     0.74 |       0.71 |       364 |
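
For reference, a per-label report in this format can be produced with scikit-learn. This is only a sketch: `y_true` and `y_pred` are placeholders for the gold labels and model predictions on the 364-example test set, which are not distributed with this card.

```python
from sklearn.metrics import accuracy_score, classification_report

y_true = [...]  # placeholder: gold CAP major topic ids for the test set
y_pred = [...]  # placeholder: predicted ids, e.g. logits.argmax(-1) per example

print("accuracy:", round(accuracy_score(y_true, y_pred), 2))
print(classification_report(y_true, y_pred, digits=2))  # per-label precision/recall/f1
```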

### Fine-tuning procedure
This model was fine-tuned with the following key hyperparameters (an illustrative training setup follows the list):

- **Number of Training Epochs**: 10
- **Batch Size**: 8
- **Learning Rate**: 5e-06
- **Early Stopping**: enabled with a patience of 2 epochs
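
The original training script is not distributed with this card. The sketch below is a minimal, illustrative `Trainer` setup reproducing the hyperparameters above; `train_dataset` and `eval_dataset` are placeholders for tokenized CAP-coded datasets, and `num_labels=22` is inferred from the 0-21 label range in the evaluation table.

```python
from transformers import (AutoModelForSequenceClassification, EarlyStoppingCallback,
                          Trainer, TrainingArguments)

# num_labels=22 is inferred from the 0-21 label range in the evaluation table
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-large", num_labels=22)

training_args = TrainingArguments(
    output_dir="./xlm-roberta-large-portuguese-execspeech-cap-v3",
    num_train_epochs=10,            # number of training epochs
    per_device_train_batch_size=8,  # batch size
    learning_rate=5e-6,             # learning rate
    evaluation_strategy="epoch",    # evaluate once per epoch so early stopping can trigger
    save_strategy="epoch",
    load_best_model_at_end=True,    # required by EarlyStoppingCallback
    metric_for_best_model="loss",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,    # placeholder: tokenized training split
    eval_dataset=eval_dataset,      # placeholder: tokenized validation split
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],  # patience of 2 epochs
)
trainer.train()
```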

## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.

## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language), either sent to poltextlab{at}poltextlab{dot}com or submitted via the [CAP Babel Machine](https://babel.poltextlab.com).

## Reference
Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large Language Models for Multilingual Policy Topic Classification: The Babel Machine Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434

## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. With `transformers` versions earlier than 4.27, you need to install it manually (`pip install sentencepiece`).

If you encounter a `RuntimeError` when loading the model with the `from_pretrained()` method, passing `ignore_mismatched_sizes=True` should resolve the issue, as in the example below.
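
```python
from transformers import AutoModelForSequenceClassification

# Ignore classifier-head size mismatches between the checkpoint and the instantiated model
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-portuguese-execspeech-cap-v3",
    ignore_mismatched_sizes=True,
)
```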