---
license: mit
language:
- multilingual
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
extra_gated_prompt: >-
  Our models are intended for academic use only. If you are not affiliated with
  an academic institution, please provide a rationale for using our models.
  If you use our models for your work or research, please cite this paper:
  Sebők, M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging
  Open Large Language Models for Multilingual Policy Topic Classification: The
  Babel Machine Approach. Social Science Computer Review, 0(0).
  https://doi.org/10.1177/08944393241259434
extra_gated_fields:
  Name: text
  Country: country
  Institution: text
  E-mail: text
  Use case: text
---
# xlm-roberta-large-german-media-cap-v3

## Model description

An [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) model fine-tuned on multilingual training data containing texts from the media domain labelled with major topic codes from the Comparative Agendas Project.
## How to use the model

```python
from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
    model="poltextlab/xlm-roberta-large-german-media-cap-v3",
    task="text-classification",
    tokenizer=tokenizer,
    use_fast=False,
    token="<your_hf_read_only_token>",
)

text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
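The pipeline returns a list of dicts with the predicted label and its confidence score. A minimal sketch of inspecting the result, continuing from the snippet above; the label string and score shown in the comment are illustrative, not actual model output:

```python
result = pipe(text)
# e.g. [{'label': '17', 'score': 0.91}]  # values are illustrative
print(result[0]["label"], result[0]["score"])
```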
## Gated access

Due to the gated access, you must pass the `token` parameter when loading the model. With earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead.
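A minimal sketch of the two variants; the token string is a placeholder for your own read-only token:

```python
from transformers import pipeline

model_id = "poltextlab/xlm-roberta-large-german-media-cap-v3"

# Recent Transformers releases accept the `token` argument:
pipe = pipeline(task="text-classification", model=model_id,
                token="<your_hf_read_only_token>")

# Older releases used `use_auth_token` instead:
pipe = pipeline(task="text-classification", model=model_id,
                use_auth_token="<your_hf_read_only_token>")
```

Alternatively, authenticating once with `huggingface-cli login` lets you omit the token argument entirely.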
## Model performance

The model was evaluated on a test set of 1,203 examples (10% of the available data). Model accuracy is **0.64**.
| label | precision | recall | f1-score | support |
|---|---|---|---|---|
| 0 | 0.61 | 0.66 | 0.64 | 83 |
| 1 | 0.29 | 0.27 | 0.28 | 30 |
| 2 | 0.77 | 0.74 | 0.75 | 31 |
| 3 | 0.61 | 0.74 | 0.67 | 23 |
| 4 | 0.57 | 0.48 | 0.52 | 25 |
| 5 | 0.54 | 0.78 | 0.64 | 9 |
| 6 | 1.00 | 0.10 | 0.18 | 10 |
| 7 | 0.69 | 0.58 | 0.63 | 19 |
| 8 | 0.75 | 0.40 | 0.52 | 30 |
| 9 | 0.52 | 0.78 | 0.62 | 59 |
| 10 | 0.56 | 0.20 | 0.30 | 44 |
| 11 | 1.00 | 0.32 | 0.48 | 22 |
| 12 | 0.00 | 0.00 | 0.00 | 10 |
| 13 | 0.64 | 0.37 | 0.47 | 67 |
| 14 | 0.68 | 0.73 | 0.70 | 165 |
| 15 | 0.89 | 0.36 | 0.52 | 22 |
| 16 | 0.00 | 0.00 | 0.00 | 17 |
| 17 | 0.61 | 0.77 | 0.68 | 250 |
| 18 | 0.70 | 0.77 | 0.73 | 265 |
| 19 | 0.00 | 0.00 | 0.00 | 2 |
| 20 | 0.85 | 0.55 | 0.67 | 20 |
| macro avg | 0.58 | 0.46 | 0.48 | 1203 |
| weighted avg | 0.64 | 0.64 | 0.62 | 1203 |
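The per-label table above follows the layout of scikit-learn's `classification_report`. A minimal sketch of how such a report is produced, assuming you have gold labels and model predictions as integer lists; the arrays below are illustrative placeholders, not the actual test set:

```python
from sklearn.metrics import accuracy_score, classification_report

# Illustrative placeholders: in practice these come from the held-out
# test set and from running the pipeline over it.
y_true = [0, 1, 2, 17, 18, 18]
y_pred = [0, 1, 2, 17, 18, 14]

print("accuracy:", accuracy_score(y_true, y_pred))
# digits=2 matches the precision shown in the table above
print(classification_report(y_true, y_pred, digits=2))
```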
## Inference platform

This model is used by the CAP Babel Machine, an open-source and free natural language processing tool designed to simplify and speed up projects for comparative research.
## Cooperation

Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or through the CAP Babel Machine.
## Debugging and issues

This architecture uses the `sentencepiece` tokenizer. To run the model with Transformers versions earlier than `4.27`, you need to install `sentencepiece` manually.

If you encounter a `RuntimeError` when loading the model with the `from_pretrained()` method, passing `ignore_mismatched_sizes=True` should solve the issue.
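A minimal sketch of both fixes; the model id is the one above, and the token string is a placeholder:

```python
# On transformers < 4.27, install the tokenizer dependency first:
#   pip install sentencepiece
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-german-media-cap-v3",
    ignore_mismatched_sizes=True,  # works around the RuntimeError on load
    token="<your_hf_read_only_token>",  # gated model: pass your own token
)
```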