---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- ur
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
pretty_name: roman_urdu_hate_speech
tags:
- binary classification
dataset_info:
- config_name: Coarse_Grained
  features:
  - name: tweet
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': Abusive/Offensive
          '1': Normal
  splits:
  - name: train
    num_bytes: 725715
    num_examples: 7208
  - name: test
    num_bytes: 202318
    num_examples: 2002
  - name: validation
    num_bytes: 79755
    num_examples: 800
  download_size: 730720
  dataset_size: 1007788
- config_name: Fine_Grained
  features:
  - name: tweet
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': Abusive/Offensive
          '1': Normal
          '2': Religious Hate
          '3': Sexism
          '4': Profane/Untargeted
  splits:
  - name: train
    num_bytes: 723666
    num_examples: 7208
  - name: test
    num_bytes: 203590
    num_examples: 2002
  - name: validation
    num_bytes: 723666
    num_examples: 7208
  download_size: 1199660
  dataset_size: 1650922
configs:
- config_name: Coarse_Grained
  data_files:
  - split: train
    path: Coarse_Grained/train-*
  - split: test
    path: Coarse_Grained/test-*
  - split: validation
    path: Coarse_Grained/validation-*
  default: true
- config_name: Fine_Grained
  data_files:
  - split: train
    path: Fine_Grained/train-*
  - split: test
    path: Fine_Grained/test-*
  - split: validation
    path: Fine_Grained/validation-*
---

# Dataset Card for roman_urdu_hate_speech

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions) 

## Dataset Description

- **Homepage:** [roman_urdu_hate_speech homepage](https://aclanthology.org/2020.emnlp-main.197/)
- **Repository:** [roman_urdu_hate_speech repository](https://github.com/haroonshakeel/roman_urdu_hate_speech)
- **Paper:** [Hate-Speech and Offensive Language Detection in Roman Urdu](https://aclanthology.org/2020.emnlp-main.197.pdf)
- **Leaderboard:** [N/A]
- **Point of Contact:** [M. Haroon Shakeel](mailto:m.shakeel@lums.edu.pk)

### Dataset Summary

The Roman Urdu Hate-Speech and Offensive Language Detection (RUHSOLD) dataset is a corpus of Roman Urdu tweets annotated by experts in the language. The authors develop gold standards for two sub-tasks. The first sub-task, which they call coarse-grained classification, uses binary labels: Hate-Offensive content and Normal (i.e., inoffensive) content. The second sub-task, fine-grained classification, breaks Hate-Offensive content into four granular labels that are defined in the related literature and are the most relevant for users who converse in Roman Urdu. The objective of creating two gold standards is to let researchers evaluate hate-speech detection approaches in both an easier (coarse-grained) and a more challenging (fine-grained) scenario.

### Supported Tasks and Leaderboards

- `multi-class-classification`, `text-classification`: the dataset can be used for both binary classification (coarse-grained labels) and multi-class classification (fine-grained labels).

### Languages

The text of this dataset is Roman Urdu, i.e., Urdu written in the Latin script. The associated BCP-47 code is `ur`.

## Dataset Structure

### Data Instances

The dataset has two configurations, Coarse_Grained and Fine_Grained. In the coarse-grained configuration each tweet is labelled as either abusive/offensive or normal, whereas the fine-grained configuration distinguishes several classes of hateful content.

For the Coarse_Grained configuration (Task 1: coarse-grained classification), the label mapping is:

- 0: Abusive/Offensive
- 1: Normal

For the Fine_Grained configuration (Task 2: fine-grained classification), the label mapping is:

- 0: Abusive/Offensive
- 1: Normal
- 2: Religious Hate
- 3: Sexism
- 4: Profane/Untargeted
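The mappings above can be expressed as plain lookup lists, where the position of each name is its integer label id. A minimal sketch (helper names are illustrative, not part of the dataset's API):

```python
# Label name lists; the index of each name is its integer label id.
COARSE_LABELS = ["Abusive/Offensive", "Normal"]
FINE_LABELS = [
    "Abusive/Offensive",
    "Normal",
    "Religious Hate",
    "Sexism",
    "Profane/Untargeted",
]

def int2str(label_id: int, names: list) -> str:
    """Map an integer label id back to its human-readable name."""
    return names[label_id]

def str2int(name: str, names: list) -> int:
    """Map a label name to its integer id."""
    return names.index(name)
```

Note that the first two fine-grained ids coincide with the coarse-grained ones, so coarse labels are a prefix of the fine-grained label set.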

An example from Roman Urdu Hate Speech looks as follows:
```
{
  'tweet': 'there are some yahodi daboo like imran chore zakat khore',
  'label': 0
}
```

### Data Fields

- `tweet`: a string containing the tweet text. Tweets were selected by random sampling of 10,000 tweets from a base of 50,000 tweets and then annotated.

- `label`: an annotation produced by three independent annotators; during the annotation process, all conflicts were resolved by a majority vote among the three annotators.
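The majority-vote conflict resolution described above can be sketched as follows (a hypothetical helper, not taken from the authors' codebase; with three annotators and binary labels a strict majority always exists, while three-way ties are possible for the fine-grained labels and would need an extra rule):

```python
from collections import Counter

def resolve_by_majority(votes):
    """Return the label chosen by the most annotators.

    `votes` is a sequence of three integer labels, one per annotator.
    If all three disagree, Counter.most_common breaks the tie arbitrarily.
    """
    label, count = Counter(votes).most_common(1)[0]
    return label

# Two annotators chose 0 (Abusive/Offensive), one chose 1 (Normal):
# the resolved label is 0.
```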

### Data Splits

Each configuration, Coarse_Grained and Fine_Grained, is further split into training, test, and validation sets in a 70/20/10 ratio, using stratification based on the fine-grained labels.

Stratified sampling is used to preserve the same label ratio across all splits.

The final split sizes are as follows:

| Train | Test | Validation |
|-------|------|------------|
| 7209  | 2003 | 801        |
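A stratified split like the one described above can be sketched in plain Python (an illustrative helper; the authors' actual tooling is not specified in this card):

```python
import random
from collections import defaultdict

def stratified_split(examples, labels, ratios=(0.7, 0.2, 0.1), seed=0):
    """Split (example, label) pairs into train/test/validation sets,
    keeping each label's share roughly equal across the three splits."""
    by_label = defaultdict(list)
    for ex, lab in zip(examples, labels):
        by_label[lab].append(ex)

    rng = random.Random(seed)
    splits = ([], [], [])  # (train, test, validation)
    for lab, items in by_label.items():
        rng.shuffle(items)
        n = len(items)
        cut1 = int(n * ratios[0])
        cut2 = cut1 + int(n * ratios[1])
        splits[0].extend((ex, lab) for ex in items[:cut1])
        splits[1].extend((ex, lab) for ex in items[cut1:cut2])
        splits[2].extend((ex, lab) for ex in items[cut2:])
    return splits
```

Because the last split takes the remainder within each label group, split sizes can be off by a few examples when the per-label counts are not exact multiples of the ratios, which matches the slight variation between the sizes above and the per-config example counts in the metadata.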


## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The dataset was created by Hammad Rizwan, Muhammad Haroon Shakeel, Asim Karim during work done at Department of Computer Science, Lahore University of Management Sciences (LUMS), Lahore, Pakistan.

### Licensing Information

The dataset inherits the licensing status of the [Roman Urdu Hate Speech Dataset Repository](https://github.com/haroonshakeel/roman_urdu_hate_speech), which is released under the MIT License.

### Citation Information

```bibtex
@inproceedings{rizwan2020hate,
  title={Hate-speech and offensive language detection in roman Urdu},
  author={Rizwan, Hammad and Shakeel, Muhammad Haroon and Karim, Asim},
  booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  pages={2512--2522},
  year={2020}
}
```

### Contributions

Thanks to [@bp-high](https://github.com/bp-high) for adding this dataset.