---
license: openrail++
language:
- uk
widget:
- text: Ти неймовірна!
datasets:
- ukr-detect/ukr-toxicity-dataset
base_model:
- FacebookAI/xlm-roberta-base
---

## Binary toxicity classifier for Ukrainian

This is an ["xlm-roberta-base"](https://huggingface.co/xlm-roberta-base) model fine-tuned on the downstream task of binary toxicity classification for Ukrainian text.

The evaluation metrics for binary toxicity classification are:

- **Precision**: 0.9130
- **Recall**: 0.9065
- **F1**: 0.9061

Details of the training and evaluation data will be clarified later.
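As a rough sketch of how such metrics could be reproduced on the dataset listed above, the snippet below evaluates the checkpoint with scikit-learn. The split name (`test`), the column names (`text`, `toxic`), and the macro averaging are assumptions, not confirmed details of the original evaluation.

```python
# Sketch: evaluating the classifier on ukr-detect/ukr-toxicity-dataset.
# NOTE: split name, column names ("text", "toxic"), and the averaging
# method are assumptions; adjust them to the actual dataset schema.
import torch
from datasets import load_dataset
from sklearn.metrics import precision_recall_fscore_support
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained('dardem/xlm-roberta-base-uk-toxicity')
model = AutoModelForSequenceClassification.from_pretrained('dardem/xlm-roberta-base-uk-toxicity')
model.eval()

dataset = load_dataset('ukr-detect/ukr-toxicity-dataset', split='test')  # assumed split

predictions, labels = [], []
for example in dataset:
    inputs = tokenizer(example['text'], return_tensors='pt', truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    predictions.append(int(logits.argmax(dim=-1)))
    labels.append(int(example['toxic']))

precision, recall, f1, _ = precision_recall_fscore_support(labels, predictions, average='macro')
print(f'Precision: {precision:.4f}  Recall: {recall:.4f}  F1: {f1:.4f}')
```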

## How to use
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# load tokenizer and model weights
tokenizer = AutoTokenizer.from_pretrained('dardem/xlm-roberta-base-uk-toxicity')
model = AutoModelForSequenceClassification.from_pretrained('dardem/xlm-roberta-base-uk-toxicity')

# prepare the input
batch = tokenizer.encode('Ти неймовірна!', return_tensors='pt')

# inference: the logits give the binary toxicity prediction
with torch.no_grad():
    logits = model(batch).logits
predicted_class_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_class_id])
```
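
Alternatively, a minimal sketch using the `transformers` pipeline API; the exact label names returned depend on the model config and are not confirmed here:

```python
from transformers import pipeline

# text-classification pipeline wrapping the same checkpoint
classifier = pipeline('text-classification', model='dardem/xlm-roberta-base-uk-toxicity')

print(classifier('Ти неймовірна!'))  # e.g. [{'label': ..., 'score': ...}]
```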

## Citation

```
@inproceedings{dementieva-etal-2024-toxicity,
    title = "Toxicity Classification in {U}krainian",
    author = "Dementieva, Daryna  and
      Khylenko, Valeriia  and
      Babakov, Nikolay  and
      Groh, Georg",
    editor = {Chung, Yi-Ling  and
      Talat, Zeerak  and
      Nozza, Debora  and
      Plaza-del-Arco, Flor Miriam  and
      R{\"o}ttger, Paul  and
      Mostafazadeh Davani, Aida  and
      Calabrese, Agostina},
    booktitle = "Proceedings of the 8th Workshop on Online Abuse and Harms (WOAH 2024)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.woah-1.19",
    doi = "10.18653/v1/2024.woah-1.19",
    pages = "244--255",
    abstract = "The task of toxicity detection is still a relevant task, especially in the context of safe and fair LMs development. Nevertheless, labeled binary toxicity classification corpora are not available for all languages, which is understandable given the resource-intensive nature of the annotation process. Ukrainian, in particular, is among the languages lacking such resources. To our knowledge, there has been no existing toxicity classification corpus in Ukrainian. In this study, we aim to fill this gap by investigating cross-lingual knowledge transfer techniques and creating labeled corpora by: (i){\textasciitilde}translating from an English corpus, (ii){\textasciitilde}filtering toxic samples using keywords, and (iii){\textasciitilde}annotating with crowdsourcing. We compare LLMs prompting and other cross-lingual transfer approaches with and without fine-tuning offering insights into the most robust and efficient baselines.",
}
```