# SOLD - A Benchmark for Sinhala Offensive Language Identification

In this repository, we introduce the Sinhala Offensive Language Dataset (**SOLD**) and present multiple experiments on this dataset. **SOLD** is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive or not offensive at both the sentence level and the token level. **SOLD** is the largest offensive language dataset compiled for Sinhala. We also introduce **SemiSOLD**, a larger dataset containing more than 145,000 Sinhala tweets annotated following a semi-supervised approach.

:warning: This repository contains texts that may be offensive and harmful.

## Annotation

We use an annotation scheme split into two levels, deciding (a) the offensiveness of a tweet (sentence-level) and (b) the tokens that contribute to that offence (token-level).

### Sentence-level

Our sentence-level offensive language detection follows level A in OLID [(Zampieri et al., 2019)](https://aclanthology.org/N19-1144/). We asked annotators to discriminate between the following types of tweets:

* **Offensive (OFF)**: Posts containing any form of non-acceptable language (profanity) or a targeted offence, which can be veiled or direct. This includes insults, threats, and posts containing profane language or swear words.
* **Not Offensive (NOT)**: Posts that do not contain offence or profanity.

Each tweet was annotated with one of the above labels, which we used as the labels in sentence-level offensive language identification.

### Token-level

To provide a human explanation of the labelling, we collect rationales for the offensive language. Following HateXplain [(Mathew et al., 2021)](https://ojs.aaai.org/index.php/AAAI/article/view/17745), we define a rationale as a specific text segment that justifies the human annotator's decision on the sentence-level label. Therefore, we ask the annotators to highlight the particular tokens in a tweet that support their judgement about the sentence-level label (offensive, not offensive). Specifically, if a tweet is offensive, we guide the annotators to highlight the tokens that support this judgement, including non-verbal expressions such as emojis and morphemes used to convey the intention. We use these as the token-level offensive labels in SOLD.

![Alt text](https://github.com/Sinhala-NLP/SOLD/blob/master/images/SOLD_Annotation.png?raw=true "Annotation Process")

## Data

SOLD is released on HuggingFace. It can be loaded into pandas dataframes using the following code.

```python
from datasets import Dataset
from datasets import load_dataset

sold_train = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='train'))
sold_test = Dataset.to_pandas(load_dataset('sinhala-nlp/SOLD', split='test'))
```

The dataset contains the following columns.

* **post_id** - Twitter ID
* **text** - Post text
* **tokens** - Tokenised text. Each token is separated by a space.
* **rationals** - Offensive tokens. If a token is offensive it is marked as 1, and 0 otherwise.
* **label** - Sentence-level label, offensive or not offensive.
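
Since `tokens` and the 0/1 flags in `rationals` are parallel, the annotator-highlighted tokens of a post can be recovered by zipping the two columns. A minimal sketch, assuming `rationals` arrives in the dataframe as a list of flags (or as its string representation):

```python
import ast

def offensive_tokens(row):
    """Return the tokens highlighted as offensive for one post."""
    tokens = row['tokens'].split(' ')
    flags = row['rationals']
    if isinstance(flags, str):  # some exports store the list as text
        flags = ast.literal_eval(flags)
    return [tok for tok, flag in zip(tokens, flags) if flag == 1]

highlighted = sold_train.apply(offensive_tokens, axis=1)
```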

![Alt text](https://github.com/Sinhala-NLP/SOLD/blob/master/images/SOLD_Examples.png?raw=true "Four examples from the SOLD dataset")

SemiSOLD is also released on HuggingFace and can be loaded into a pandas dataframe using the following code.

```python
from datasets import Dataset
from datasets import load_dataset

semi_sold = Dataset.to_pandas(load_dataset('sinhala-nlp/SemiSOLD', split='train'))
```

The dataset contains the following columns.

* **post_id** - Twitter ID
* **text** - Post text

Furthermore, it contains predicted offensiveness scores from the following classifiers trained on the SOLD train set: xlmr, xlmt, mbert, sinbert, lstm_ft, cnn_ft, lstm_cbow, cnn_cbow, lstm_sl, cnn_sl and svm.
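
A minimal sketch of how these scores can be used, assuming the score columns are named exactly after the classifiers listed above: rank SemiSOLD tweets by mean predicted offensiveness and keep the ones the classifiers agree on.

```python
score_cols = ['xlmr', 'xlmt', 'mbert', 'sinbert', 'lstm_ft', 'cnn_ft',
              'lstm_cbow', 'cnn_cbow', 'lstm_sl', 'cnn_sl', 'svm']

# Mean score approximates ensemble confidence; a low standard deviation
# means the classifiers agree with each other.
semi_sold['mean_score'] = semi_sold[score_cols].mean(axis=1)
semi_sold['std_score'] = semi_sold[score_cols].std(axis=1)

# Keep confidently offensive tweets; the 0.5 and 0.1 thresholds are
# illustrative, not the values used in the paper.
confident_off = semi_sold[(semi_sold['mean_score'] > 0.5) &
                          (semi_sold['std_score'] < 0.1)]
```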

## Experiments

Clone the repository and install the libraries using the following command (preferably inside a conda environment):

~~~
pip install -r requirements.txt
~~~

### Sentence-level

Sentence-level transformer-based experiments can be executed using the following command.

~~~
python -m experiments.sentence_level.sinhala_deepoffense
~~~

The command takes the following arguments:

~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc.).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial language if transfer learning is performed (hi, en or si).
  * hi - Perform transfer learning from the HASOC 2019 Hindi dataset (Modha et al., 2019).
  * en - Perform transfer learning from the OffensEval English dataset (Zampieri et al., 2019).
  * si - Perform transfer learning from the CCMS Sinhala dataset (Rathnayake et al., 2021).
--augment : Perform semi-supervised data augmentation.
--std : Standard deviation threshold over the classifiers' predictions, used to filter the augmented data.
--augment_type : The type of the data augmentation.
  * off - Augment only the offensive instances.
  * normal - Augment both offensive and non-offensive instances.
~~~
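
For instance, an illustrative invocation (the model name is only an example) that fine-tunes XLM-R with transfer learning from English:

~~~
python -m experiments.sentence_level.sinhala_deepoffense --model_type xlmroberta --model_name xlm-roberta-large --transfer true --transfer_language en
~~~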

Sentence-level CNN- and LSTM-based experiments can be executed using the following command.

~~~
python -m experiments.sentence_level.sinhala_offensive_nn
~~~

The command takes the following arguments:

~~~
--model_type : Type of the architecture (cnn2D, lstm).
--model_name : The exact word embeddings to use. This may be a gensim model, or the path to a word embedding file.
--augment : Perform semi-supervised data augmentation.
--std : Standard deviation threshold over the classifiers' predictions, used to filter the augmented data.
--augment_type : The type of the data augmentation.
  * off - Augment only the offensive instances.
  * normal - Augment both offensive and non-offensive instances.
~~~
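
An illustrative invocation (the embedding path is a placeholder) that trains an LSTM on top of pre-trained word embeddings:

~~~
python -m experiments.sentence_level.sinhala_offensive_nn --model_type lstm --model_name <path to embedding file>
~~~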

### Token-level

Token-level transformer-based experiments can be executed using the following command.

~~~
python -m experiments.token_level.sinhala_mudes
~~~

The command takes the following arguments:

~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc.).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
--transfer : Whether to perform transfer learning or not (true or false).
--transfer_language : The initial dataset if transfer learning is performed (hatex or tsd).
  * hatex - Perform transfer learning from the HateXplain dataset (Mathew et al., 2021).
  * tsd - Perform transfer learning from the TSD dataset (Pavlopoulos et al., 2021).
~~~
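
For example, an illustrative invocation (the model name is only an example) with transfer learning from HateXplain:

~~~
python -m experiments.token_level.sinhala_mudes --model_type xlmroberta --model_name xlm-roberta-large --transfer true --transfer_language hatex
~~~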

Token-level LIME experiments can be executed using the following command.

~~~
python -m experiments.token_level.sinhala_lime
~~~

The command takes the following arguments:

~~~
--model_type : Type of the transformer model (bert, xlmroberta, roberta etc.).
--model_name : The exact architecture and trained weights to use. This may be a Hugging Face Transformers compatible pre-trained model, a community model, or the path to a directory containing model files.
~~~
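
An illustrative invocation (the model name is only an example):

~~~
python -m experiments.token_level.sinhala_lime --model_type xlmroberta --model_name xlm-roberta-large
~~~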

## Acknowledgments

We want to acknowledge Janitha Hapuarachchi, Sachith Suraweera, Chandika Udaya Kumara and Ridmi Randima, the team of volunteer annotators who gave their free time and effort to help us produce SOLD.

## Citation

If you are using the dataset or the models, please cite the following paper.

~~~
@article{ranasinghe2022sold,
  title={SOLD: Sinhala Offensive Language Dataset},
  author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},
  journal={arXiv preprint arXiv:2212.00851},
  year={2022}
}
~~~