---
license: openrail++
dataset_info:
  features:
  - name: text
    dtype: string
  - name: tags
    dtype: float64
  splits:
  - name: train
    num_bytes: 2105604
    num_examples: 12682
  - name: validation
    num_bytes: 705759
    num_examples: 4227
  - name: test
    num_bytes: 710408
    num_examples: 4214
  download_size: 2073133
  dataset_size: 3521771
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# Ukrainian Toxicity Dataset (Semi-natural)
This is the first toxicity classification dataset of its kind for the Ukrainian language. The dataset was obtained semi-automatically by filtering with toxic keywords. For a manually collected, crowdsourced dataset, please check textdetox/multilingual_toxicity_dataset.
> Due to the subjective nature of toxicity, definitions of toxic language will vary. We include items that are commonly referred to as vulgar or profane language. (NLLB paper)
## Dataset formation
- Ukrainian tweets were filtered with toxic keywords so that only tweets containing toxic language remained. Source data: https://github.com/saganoren/ukr-twi-corpus
- Non-toxic sentences were taken from the same tweet corpus, as well as from news and fiction sentences in UD Ukrainian IU: https://universaldependencies.org/treebanks/uk_iu/index.html
- The data were then split into train, validation, and test sets, with all splits balanced both by the toxic/non-toxic label and by data source (see the sketch below).
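
The keyword-filtering and balanced-split steps can be illustrated with a minimal sketch. This is not the authors' pipeline: `TOXIC_KEYWORDS`, the placeholder tweets, and the 60/20/20 split ratios are assumptions for illustration only.

```python
import random

# Hypothetical keyword list and placeholder data; the real keyword list used by the
# authors is not reproduced here.
TOXIC_KEYWORDS = {"keyword_a", "keyword_b"}

def is_toxic(text: str) -> bool:
    """Flag a tweet as toxic if it contains any keyword (case-insensitive substring match)."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in TOXIC_KEYWORDS)

def three_way_split(samples, seed=42, ratios=(0.6, 0.2, 0.2)):
    """Shuffle and split a list into train/validation/test portions."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n_train = int(ratios[0] * len(samples))
    n_val = int(ratios[1] * len(samples))
    return samples[:n_train], samples[n_train:n_train + n_val], samples[n_train + n_val:]

# Label tweets by keyword match, then split each class separately so that the
# toxic/non-toxic balance is preserved across train, validation, and test.
tweets = ["some tweet", "some tweet with keyword_a"]  # placeholder input
labeled = [{"text": t, "tags": 1.0 if is_toxic(t) else 0.0} for t in tweets]
toxic = [s for s in labeled if s["tags"] == 1.0]
non_toxic = [s for s in labeled if s["tags"] == 0.0]
train, validation, test = (a + b for a, b in zip(three_way_split(toxic), three_way_split(non_toxic)))
```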
Labels: 0 - non-toxic, 1 - toxic.
## Load dataset

```python
from datasets import load_dataset

dataset = load_dataset("ukr-detect/ukr-toxicity-dataset")
```
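
A small usage sketch follows, showing how the splits and the float64 `tags` field might be inspected; casting the tag to an integer class name is an assumption based on the 0/1 label scheme above rather than something defined by the dataset itself.

```python
from datasets import load_dataset

dataset = load_dataset("ukr-detect/ukr-toxicity-dataset")

# Report split sizes (train / validation / test).
for split in ("train", "validation", "test"):
    print(split, len(dataset[split]))

# Look at one example: a dict with "text" (string) and "tags" (float64, 0.0 or 1.0).
example = dataset["train"][0]
print(example["text"], example["tags"])

# Assumption: map the float tag to a readable class name per the 0/1 label scheme.
label_names = {0: "non-toxic", 1: "toxic"}
print(label_names[int(example["tags"])])
```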
## Citation
```bibtex
@inproceedings{dementieva-etal-2024-toxicity,
    title = "Toxicity Classification in {U}krainian",
    author = "Dementieva, Daryna and
      Khylenko, Valeriia and
      Babakov, Nikolay and
      Groh, Georg",
    editor = {Chung, Yi-Ling and
      Talat, Zeerak and
      Nozza, Debora and
      Plaza-del-Arco, Flor Miriam and
      R{\"o}ttger, Paul and
      Mostafazadeh Davani, Aida and
      Calabrese, Agostina},
    booktitle = "Proceedings of the 8th Workshop on Online Abuse and Harms (WOAH 2024)",
    month = jun,
    year = "2024",
    address = "Mexico City, Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.woah-1.19",
    doi = "10.18653/v1/2024.woah-1.19",
    pages = "244--255",
    abstract = "The task of toxicity detection is still a relevant task, especially in the context of safe and fair LMs development. Nevertheless, labeled binary toxicity classification corpora are not available for all languages, which is understandable given the resource-intensive nature of the annotation process. Ukrainian, in particular, is among the languages lacking such resources. To our knowledge, there has been no existing toxicity classification corpus in Ukrainian. In this study, we aim to fill this gap by investigating cross-lingual knowledge transfer techniques and creating labeled corpora by: (i){\textasciitilde}translating from an English corpus, (ii){\textasciitilde}filtering toxic samples using keywords, and (iii){\textasciitilde}annotating with crowdsourcing. We compare LLMs prompting and other cross-lingual transfer approaches with and without fine-tuning offering insights into the most robust and efficient baselines.",
}
```