---
license: apache-2.0
task_categories:
- text-classification
- token-classification
- zero-shot-classification
size_categories:
- 1M<n<10M
language:
- ar
- es
- pa
- th
- et
- fr
- fi
- hu
- lt
- ur
- so
- pl
- el
- mr
- sk
- gu
- he
- af
- te
- ro
- lv
- sv
- ne
- kn
- it
- mk
- cs
- en
- de
- da
- ta
- bn
- pt
- sq
- tl
- uk
- bg
- ca
- sw
- hi
- zh
- ja
- hr
- ru
- vi
- id
- sl
- cy
- ko
- nl
- ml
- tr
- fa
- 'no'
- multilingual
tags:
- nlp
- moderation
---

**I have decided to release the auto-moderation models all at once sometime in July/August 2023. The curated datasets used to train these models will be available first.**

<br>

This is a large multilingual toxicity dataset with nearly 3M rows of text data from 55 natural languages, all of which were written or sent by humans, not generated by machine translation models.

The preprocessed training data alone consists of 2,880,667 rows of comments, tweets, and messages. Among these rows, 416,529 are classified as toxic, while the remaining 2,464,138 are considered neutral. Below is a table illustrating the data composition:
|       |   Toxic  |  Neutral |   Total  |
|-------|----------|----------|----------|
| [multilingual-train-deduplicated.csv](./multilingual-train-deduplicated.csv) |   416,529   |   2,464,138   |   2,880,667   |
| [multilingual-validation.csv](./multilingual-validation.csv) |   1,230   |   6,770   |   8,000   |
| [multilingual-test.csv](./multilingual-test.csv) |   14,410   |   49,402   |   63,812   |

Each CSV file has three columns: `text`, `is_toxic`, and `lang`.
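
The schema above can be read with Python's standard `csv` module (or `pandas.read_csv`). A minimal sketch, using invented placeholder rows rather than actual dataset entries, and assuming the column names and binary labels described above:

```python
import csv
import io

# Tiny inline sample mimicking the dataset's three-column schema.
# These rows are invented placeholders, not actual dataset entries.
sample_csv = """text,is_toxic,lang
"Have a great day!",0,en
"You are an idiot.",1,en
"Bonne journée !",0,fr
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))

# Binary labels: 1 = toxic, 0 = neutral.
toxic = [r for r in rows if r["is_toxic"] == "1"]
print(len(rows), len(toxic))  # 3 rows total, 1 of them toxic
```

In practice, you would point the reader at `multilingual-train-deduplicated.csv` (or the validation/test files) instead of an in-memory string.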

Supported types of toxicity:
- Identity Hate/Homophobia
- Hate Speech
- Serious Insults
- Obscene
- Threats
- Harassment
- Racism
- Trolling
- Doxing
- Others

Supported languages:
- Afrikaans
- Albanian
- Arabic
- Bengali
- Bulgarian
- Catalan
- Chinese (Simplified)
- Chinese (Traditional)
- Croatian
- Czech
- Danish
- Dutch
- English
- Estonian
- Finnish
- French
- German
- Greek
- Gujarati
- Hebrew
- Hindi
- Hungarian
- Indonesian
- Italian
- Japanese
- Kannada
- Korean
- Latvian
- Lithuanian
- Macedonian
- Malayalam
- Marathi
- Nepali
- Norwegian
- Persian
- Polish
- Portuguese
- Punjabi
- Romanian
- Russian
- Slovak
- Slovenian
- Somali
- Spanish
- Swahili
- Swedish
- Tagalog
- Tamil
- Telugu
- Thai
- Turkish
- Ukrainian
- Urdu
- Vietnamese
- Welsh


<br>


### Original Source?
Around 11 months ago, I downloaded and preprocessed 2.7M rows of text data, but I have completely forgotten the original sources of these datasets...
All I remember is that I downloaded datasets from everywhere I could: HuggingFace, research papers, GitHub, Kaggle, SurgeAI, and Google Search. I even fetched 20K+ tweets using the Twitter API.
Recently, I came across two newer HuggingFace datasets and remembered to credit them below.

Known datasets:
- tomekkorbak/pile-toxicity-balanced2
- datasets/thai_toxicity_tweet
- inspection-ai/japanese-toxic-dataset (GitHub)

<br>

### Limitations
Limitations include:
- All labels were rounded to the nearest integer, so a text scored as, say, 46%–54% toxic may not be noticeably toxic or neutral.
- There were disagreements among moderators on some labels, due to ambiguity and lack of context.
- When the `text` column contains only URLs, emojis, or other content unrecognizable as natural language, the corresponding `lang` value is "unkown" (sic).
- The validation data is not representative of the training data.
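
Given these limitations, it may help to drop rows whose language could not be identified before training. A minimal sketch, again with invented sample rows; since the card spells the sentinel value "unkown", the filter below checks both spellings to be safe:

```python
import csv
import io

# Invented sample rows mimicking the three-column schema described above.
sample_csv = """text,is_toxic,lang
"Nice to meet you",0,en
"https://example.com",0,unkown
"Ciao a tutti",0,it
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))

# The card spells the sentinel "unkown"; checking both spellings
# guards against either form appearing in the data.
UNKNOWN = {"unknown", "unkown"}
kept = [r for r in rows if r["lang"] not in UNKNOWN]

print(len(kept))  # 2 rows remain after filtering
```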

Have fun modelling!