---
license: mit
language:
- de
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# poltextlab/xlm-roberta-large-german-cap-v3

## Model description
An `xlm-roberta-large` model fine-tuned on German training data labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).

## How to use the model

#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# mapping from the model's label indices to CAP major topic codes
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13', 12: '14',
                13: '15', 14: '16', 15: '17', 16: '18', 17: '19', 18: '20',
                19: '21', 20: '23', 21: '999'}

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)

MAXLEN = 256  # maximum sequence length; the original card leaves this value to the user

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is expected to be a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
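
For illustration, a minimal sketch of the `data` DataFrame the snippet above assumes (the example sentences are placeholders, not taken from the training data):

```python
# hypothetical input: any DataFrame with a "text" column works
data = pd.DataFrame({"text": [
    "Der Bundestag debattiert über die Reform der Krankenversicherung.",
    "Neue Regeln für den Ausbau erneuerbarer Energien wurden beschlossen.",
]})
```
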
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-german-cap-v3',
                                                           num_labels=22,
                                                           problem_type="multi_label_classification",
                                                           ignore_mismatched_sizes=True
                                                           )

training_args = TrainingArguments(
    output_dir='.',
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8
)

trainer = Trainer(
    model=model,
    args=training_args
)

probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
    columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`poltextlab/xlm-roberta-large-german-cap-v3` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
    output_dir=f"../model/{model_dir}/tmp/",
    logging_dir=f"../logs/{model_dir}/",
    logging_strategy='epoch',
    num_train_epochs=10,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-06,
    seed=42,
    save_strategy='epoch',
    evaluation_strategy='epoch',
    save_total_limit=1,
    load_best_model_at_end=True
)
```
We also incorporated an `EarlyStoppingCallback` into the training process with a patience of 2 epochs, wired into the Trainer as sketched below.
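
A minimal sketch of how such a callback is passed to the Trainer (the dataset names are placeholders for the tokenized splits, not values from the original card):

```python
from transformers import EarlyStoppingCallback, Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder: tokenized training split
    eval_dataset=eval_dataset,    # placeholder: tokenized validation split
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
```

Early stopping of this kind relies on `load_best_model_at_end=True` and per-epoch evaluation, both of which are set in the hyperparameters above.
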
## Model performance
The model was evaluated on a test set of 6309 examples (10% of the available data).<br>
Model accuracy is **0.69**. The label column holds the model's output indices; the `CAP_NUM_DICT` mapping above gives the corresponding CAP major topic codes.

| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.65 | 0.6 | 0.62 | 621 |
| 1 | 0.71 | 0.68 | 0.69 | 473 |
| 2 | 0.79 | 0.73 | 0.76 | 247 |
| 3 | 0.77 | 0.71 | 0.74 | 156 |
| 4 | 0.68 | 0.58 | 0.63 | 383 |
| 5 | 0.79 | 0.82 | 0.8 | 351 |
| 6 | 0.71 | 0.78 | 0.74 | 329 |
| 7 | 0.81 | 0.79 | 0.8 | 216 |
| 8 | 0.78 | 0.75 | 0.76 | 157 |
| 9 | 0.87 | 0.78 | 0.83 | 272 |
| 10 | 0.61 | 0.68 | 0.64 | 315 |
| 11 | 0.61 | 0.74 | 0.67 | 487 |
| 12 | 0.72 | 0.7 | 0.71 | 145 |
| 13 | 0.69 | 0.6 | 0.64 | 346 |
| 14 | 0.75 | 0.69 | 0.72 | 359 |
| 15 | 0.69 | 0.65 | 0.67 | 189 |
| 16 | 0.36 | 0.47 | 0.41 | 55 |
| 17 | 0.68 | 0.73 | 0.71 | 618 |
| 18 | 0.61 | 0.68 | 0.64 | 469 |
| 19 | 0 | 0 | 0 | 18 |
| 20 | 0.73 | 0.75 | 0.74 | 102 |
| 21 | 0 | 0 | 0 | 1 |
| macro avg | 0.64 | 0.63 | 0.63 | 6309 |
| weighted avg | 0.7 | 0.69 | 0.69 | 6309 |
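
Per-label metrics of this form can be reproduced with scikit-learn (a sketch, assuming `y_true` and `y_pred` hold the gold and predicted label indices for the test set):

```python
from sklearn.metrics import classification_report

# y_true, y_pred: integer label indices (0-21) for the test set
print(classification_report(y_true, y_pred, digits=2))
```
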
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.

## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).

## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually, for example:
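
```bash
pip install sentencepiece
```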

If you encounter a `RuntimeError` when loading the model with the `from_pretrained()` method, passing `ignore_mismatched_sizes=True` should solve the issue.