{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:06:25.494070Z"
},
"title": "Leveraging Affective Bidirectional Transformers for Offensive Language Detection",
"authors": [
{
"first": "Abdelrahim",
"middle": [],
"last": "Elmadany",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Lab University of British Columbia",
"institution": "",
"location": {}
},
"email": "a.elmadany@ubc.ca"
},
{
"first": "Chiyu",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Lab University of British Columbia",
"institution": "",
"location": {}
},
"email": "chiyuzh@mail.ubc.ca"
},
{
"first": "Muhammad",
"middle": [],
"last": "Abdul-Mageed",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Lab University of British Columbia",
"institution": "",
"location": {}
},
"email": "muhammad.mageeed@ubc.ca"
},
{
"first": "Azadeh",
"middle": [],
"last": "Hashemi",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Lab University of British Columbia",
"institution": "",
"location": {}
},
"email": "azadeh.hashemi@ubc.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Social media are pervasive in our life, making it necessary to ensure safe online experiences by detecting and removing offensive and hate speech. In this work, we report our submission to the Offensive Language and hate-speech Detection shared task organized with the 4 th Workshop on Open-Source Arabic Corpora and Processing Tools Arabic (OSACT4). We focus on developing purely deep learning systems, without a need for feature engineering. For that purpose, we develop an effective method for automatic data augmentation and show the utility of training both offensive and hate speech models off (i.e., by fine-tuning) previously trained affective models (i.e., sentiment and emotion). Our best models are significantly better than a vanilla BERT model, with 89.60% acc (82.31% macro F1) for hate speech and 95.20% acc (70.51% macro F1) on official TEST data.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Social media are pervasive in our life, making it necessary to ensure safe online experiences by detecting and removing offensive and hate speech. In this work, we report our submission to the Offensive Language and hate-speech Detection shared task organized with the 4 th Workshop on Open-Source Arabic Corpora and Processing Tools Arabic (OSACT4). We focus on developing purely deep learning systems, without a need for feature engineering. For that purpose, we develop an effective method for automatic data augmentation and show the utility of training both offensive and hate speech models off (i.e., by fine-tuning) previously trained affective models (i.e., sentiment and emotion). Our best models are significantly better than a vanilla BERT model, with 89.60% acc (82.31% macro F1) for hate speech and 95.20% acc (70.51% macro F1) on official TEST data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social media are widely used at a global scale. Communication between users from different backgrounds, ideologies, preferences, political orientations, etc. on these platforms can result in tensions and use of offensive and hateful speech. This negative content can be very harmful, sometimes with real-world consequences. For these reasons, it is desirable to control this type of uncivil language behavior by detecting and removing this destructive content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Although there have been a number of works on detecting offensive and hateful content in English (e.g. (Agrawal and Awekar, 2018; Badjatiya et al., 2017; Nobata et al., 2016) ), works on many other languages are either lacking or rare. This is the case for Arabic, where there have been only very few works (e.g., (Alakrot et al., 2018; Albadi et al., 2018; Mubarak et al., 2017; Mubarak and Darwish, 2019) ). For these motivations, we participated in the Offensive Language and hate-speech Detection shared task organized with the 4 th Workshop on Open-Source Arabic Corpora and Processing Tools Arabic (OSACT4).",
"cite_spans": [
{
"start": 103,
"end": 129,
"text": "(Agrawal and Awekar, 2018;",
"ref_id": "BIBREF3"
},
{
"start": 130,
"end": 153,
"text": "Badjatiya et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 154,
"end": 174,
"text": "Nobata et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 314,
"end": 336,
"text": "(Alakrot et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 337,
"end": 357,
"text": "Albadi et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 358,
"end": 379,
"text": "Mubarak et al., 2017;",
"ref_id": "BIBREF21"
},
{
"start": 380,
"end": 406,
"text": "Mubarak and Darwish, 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Offensive content and hate speech are less frequent online than civil, acceptable communication. For example, only 19% and \u223c 5% of the released shared task data are offensive and hate speech, respectively. This is the case in spite of the fact that the data seems to have been collected based on trigger seeds that are more likely to accompany this type of harmful content. As such, it is not easy to acquire data for training machine learning systems. For this reason, we direct part of our efforts to automatically augmenting training data released by the shared task organizers (Section 3.1.). Our experiments show the utility of our data enrichment method. In addition, we hypothesize trained affective models can have useful representations that might be effective for the purpose of detecting offensive and hateful content. To test this hypothesis, we fine-tune one sentiment analysis model and one emotion detection model on our training data. Our experiments support our hypothesis (Section 4.). All our models are based on the Bidirectional Encoder from Transformers (BERT) model. Our best models are significantly better than competitive baseline based on vanilla BERT. Our contributions can be summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 We present an effective method for automatically augmenting training data. Our method is simple and yields sizable additional data when we run it on a large in-house collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 We demonstrate the utility of fine-tuning off-the-shelf affective models on the two downstream tasks of offensive and hate speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 We develop highly accurate deep learning models for the two tasks of offensive content and hate speech detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The rest of the paper is organized as follows: We introduce related works in Section 2., shared task data and our datasets in Section 3., our models in Section 4., and we conclude in Section 5..",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Thematic Focus: Research on undesirable content shows that social media users sometimes utilize profane, obscene, or offensive language (Jay and Janschewitz, 2008; Wiegand et al., 2018) ; aggression (Kumar et al., 2018; Modha et al., 2018) ; toxic content (Georgakopoulos et al., 2018; Fortuna et al., 2018; Zampieri et al., 2019) , and bullying (Dadvar et al., 2013; Agrawal and Awekar, 2018; Fortuna et al., 2018) .",
"cite_spans": [
{
"start": 136,
"end": 163,
"text": "(Jay and Janschewitz, 2008;",
"ref_id": "BIBREF15"
},
{
"start": 164,
"end": 185,
"text": "Wiegand et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 199,
"end": 219,
"text": "(Kumar et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 220,
"end": 239,
"text": "Modha et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 256,
"end": 285,
"text": "(Georgakopoulos et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 286,
"end": 307,
"text": "Fortuna et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 308,
"end": 330,
"text": "Zampieri et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 346,
"end": 367,
"text": "(Dadvar et al., 2013;",
"ref_id": "BIBREF10"
},
{
"start": 368,
"end": 393,
"text": "Agrawal and Awekar, 2018;",
"ref_id": "BIBREF3"
},
{
"start": 394,
"end": 415,
"text": "Fortuna et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Overarching Applications: Several works have taken as their target detecting these types of negative content with a goal to build applications for (1) content filtering or (2) quantifying the intensity of polarization (Barber\u00e1 and Sood, 2015; Conover et al., 2011) , (3) classifying trolls and propaganda accounts that often use offensive language (Darwish et al., 2017) , (4) identifying hate speech that may correlate with hate crimes (Nobata et al., 2016) , and 5 (Chadefaux, 2014) .",
"cite_spans": [
{
"start": 218,
"end": 242,
"text": "(Barber\u00e1 and Sood, 2015;",
"ref_id": "BIBREF7"
},
{
"start": 243,
"end": 264,
"text": "Conover et al., 2011)",
"ref_id": "BIBREF9"
},
{
"start": 348,
"end": 370,
"text": "(Darwish et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 437,
"end": 458,
"text": "(Nobata et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 467,
"end": 484,
"text": "(Chadefaux, 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Methods: A manual way for detecting negative language can involve building a list of offensive words and then filtering text based on these words. As Mubarak and Darwish (2019) also point out, this approach is limited because (1) offensive words are ever evolving with new words continuously emerging, complicating the maintenance of such lists and (2) the offensiveness of certain words is highly context-and genre-dependent and hence a lexicon-based approach will not be very precise. Machine learning approaches, as such, are much more desirable since they are more nuanced to domain and also usually render more accurate, context-sensitive predictions. This is especially the case if there are enough data to train these systems.",
"cite_spans": [
{
"start": 150,
"end": 176,
"text": "Mubarak and Darwish (2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Most work based on machine learning employs a supervised approach at either (1) character level (Malmasi and Zampieri, 2017) , (2) word level (Kwok and Wang, 2013) , or (3) simply employ some representation incorporating word embeddings (Malmasi and Zampieri, 2017) . These studies use different learning methods, including Naive Bayes (Kwok and Wang, 2013) , SVMs (Malmasi and Zampieri, 2017) , and classical deep learning such as CNNs and RNNs (Nobata et al., 2016; Badjatiya et al., 2017; Alakrot et al., 2018; Agrawal and Awekar, 2018) . Accuracy of the aforementioned systems range between 76% and 90%. It is also worth noting that some earlier works (Weber et al., 2013 ) use sentiment words as features to augment other contextual features. Our work has affinity to this last category since we also leverage affective models trained on sentiment or emotion tasks. Our approach, however, differs in that we build models free of hand-crafted features. In other words, we let the model learn its representation based on training data. This is a characteristic attribute of deep learning models in general. 1 In terms of the specific information encoded in classifiers, researchers use profile information in addition to text-based features. For example, Abozinadah (2017) apply SVMs on 31 features extracted from user profiles in addition to social graph centrality measures.",
"cite_spans": [
{
"start": 96,
"end": 124,
"text": "(Malmasi and Zampieri, 2017)",
"ref_id": "BIBREF18"
},
{
"start": 142,
"end": 163,
"text": "(Kwok and Wang, 2013)",
"ref_id": "BIBREF17"
},
{
"start": 237,
"end": 265,
"text": "(Malmasi and Zampieri, 2017)",
"ref_id": "BIBREF18"
},
{
"start": 336,
"end": 357,
"text": "(Kwok and Wang, 2013)",
"ref_id": "BIBREF17"
},
{
"start": 365,
"end": 393,
"text": "(Malmasi and Zampieri, 2017)",
"ref_id": "BIBREF18"
},
{
"start": 446,
"end": 467,
"text": "(Nobata et al., 2016;",
"ref_id": "BIBREF22"
},
{
"start": 468,
"end": 491,
"text": "Badjatiya et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 492,
"end": 513,
"text": "Alakrot et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 514,
"end": 539,
"text": "Agrawal and Awekar, 2018)",
"ref_id": "BIBREF3"
},
{
"start": 656,
"end": 675,
"text": "(Weber et al., 2013",
"ref_id": "BIBREF23"
},
{
"start": 1110,
"end": 1111,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Methodologically, our work differs in three ways: (1) we train offensive and hate speech models off affective models (i.e., we fine-tune already trained sentiment and emotion models on both the offensive and hate speech tasks). 2We apply BERT language models on these two tasks. We also (3) automatically augment offensive and hate speech training data using a simple data enrichment method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Arabic Offensive Content: Very few works have been applied to the Arabic language, focusing on detecting offensive language. For example, (Mubarak et al., 2017) develop a list of obscene words and hashtags using patterns common in offensive and rude communications to label a dataset of 1,100 tweets. Mubarak and Darwish (2019) applied character n-gram FasText model on a large dataset (3.3M tweets) of offensive content. Our work is similar to Mubarak and Darwish (2019) in that we also automatically augment training data based on an initial seed lexicon.",
"cite_spans": [
{
"start": 138,
"end": 160,
"text": "(Mubarak et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 301,
"end": 327,
"text": "Mubarak and Darwish (2019)",
"ref_id": "BIBREF20"
},
{
"start": 445,
"end": 471,
"text": "Mubarak and Darwish (2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "In our experiments, we use two types of data: (1) data distributed by the Offensive Language Detection shared task and (2) an automatically collected dataset that we develop (Section 3.1.). The shared task dataset comprises 10,000 tweets manually annotated for two sub-tasks: offensiveness (Sub task A) 2 and hate speech (Sub task B) 3 . According to shared task organizers, 4 , offensive tweets in the data contain explicit or implicit insults or attacks against other people, or inappropriate language. Organizers also maintain that hate speech tweets contains insults or threats targeting a specific group of people based on the nationality, ethnicity, gender, political or sport affiliation, religious belief, or other common characteristics of such a group. The dataset is split by shared task organizers into 70% TRAIN, 10% DEV, and 20% TEST. Both labeled TRAIN and DEV splits were shared with participating teams, while tweets of TEST data (without labels) was only released briefly before competition deadline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "It is noteworthy that the dataset is imbalanced. For offensiveness (Sub task A), only 20% of the TRAIN split are labeled as offensive and the rest is not offensive. For hate speech (Sub task B), only 5% of the tweets are annotated as hateful. Due to this imbalanced, the official evaluation metric is macro F 1 score. Table 1 shows the size and label distribution in the shared task data splits 2)",
"cite_spans": [],
"ref_spans": [
{
"start": 318,
"end": 325,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "Hey, you Lebanese guy, you're the wastes of the French colonizers. The Lebanese in the Gulf put their women in prostitution work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "Examples for offensive but not hate-speech tweets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "Oh my lord... Thank God she has disability. What would have happened if she were not disabled?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3."
},
{
"text": "I wonder what you, and this little pitch by your side, are hiding for us, John Snow?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4)",
"sec_num": null
},
{
"text": "Examples for not offensive and not hate-speech tweets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4)",
"sec_num": null
},
{
"text": "Either I become the most important in your life, or I become nothing at all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "5)",
"sec_num": null
},
{
"text": "Wow! How wonderful this food is, Sumaia! You're such a honey, beauty, sweetie, and good cook! You're are artist! You're everything!",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6)",
"sec_num": null
},
{
"text": "As explained earlier, the positive class in the offensive sub-task (i.e., the category 'offensive') is only 20% and in the hateful sub-task (i.e., the class 'hateful') it is only 5%. Since our goal is to develop exclusively deep learning matically extracted dataset, to which we refer to as augmented (AUG).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.1."
},
{
"text": "6 Original tweets can be run-on sentences, lack proper grammatical structures or punctuation. In presented translation, for readability, while we maintain the meaning as much as possible, we render grammatical, well-structured sentence. models, we needed to extend our training data such that we increase the positive samples. For this reason, we develop a simple method to automatically augment our training data. Our method first depends on extracting tweets that contain any of a seed lexicon (explained below) and satisfy a predicted sentiment label condition. We hypothesize that both offensive and hateful content would carry negative sentiment and so it would be intuitive to restrict any automatically extracted tweets to those that carry these negative sentiment labels. To further test this hypothesis, we analyzing the distribution of the sentiment classes in the TRAIN split using an off-the-shelf tool, AraNet (Abdul-Mageed et al., 2020). As shown in Figure 3 , AraNet assigns sensible sentiment labels to the data. For the 'offensive' class, the tool assigns 65% negative sentiment tags and for the non-offensive class it assigns only 60% positive sentiment labels. 7 For the hate speech data, we find that AraNet assigns 72% negative labels to the 'hateful' class and 55% positive sentiment labels for the 'non-hateful' class. Based on this analysis, we decide to impose a sentiment-label condition on the automatically extended data as explained earlier. In other words, we only choose 'offensive' and 'hateful' class data from tweets predicted as negative sentiment. Similarly, we only choose 'non-offensive' and 'non-hateful' tweets assigned positive sentiment labels by AraNet. We now explain how we extend the dataset. We now explain our approach to extract tweets with an offensive and hateful seed lexicon.",
"cite_spans": [],
"ref_spans": [
{
"start": 964,
"end": 973,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.1."
},
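{
"text": "For concreteness, the sentiment-distribution analysis above can be sketched in a few lines of Python. The sketch below is illustrative only: predict_sentiment() is a hypothetical placeholder standing in for AraNet (whose actual interface is not shown here), and the gold labels are assumed to be read from the shared task TRAIN file.",
"code_sketch": [
"from collections import Counter, defaultdict",
"",
"def sentiment_distribution(tweets, gold_labels, predict_sentiment):",
"    # Count predicted sentiment labels ('pos' / 'neg') per gold class.",
"    counts = defaultdict(Counter)",
"    for tweet, label in zip(tweets, gold_labels):",
"        counts[label][predict_sentiment(tweet)] += 1",
"    # Normalize the counts into percentages per gold class.",
"    return {label: {s: 100.0 * n / sum(c.values()) for s, n in c.items()}",
"            for label, c in counts.items()}",
"",
"# Toy usage with a trivial placeholder predictor (AraNet would be used instead).",
"print(sentiment_distribution(['tweet 1', 'tweet 2'], ['OFF', 'NOT_OFF'], lambda t: 'neg'))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.1."
},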
{
"text": "To generate a seed lexicon, we extract all words that follow the Ya (Oh, you) in the shared task TRAIN split positive class in the two sub-tasks (i.e., 'offensive' and 'hateful'). The intuition here is that the word Ya acts as a trigger word that is likely to be followed by negative lexica. This gives us a set of 2,158. We find that this set can have words that are neither offensive nor hateful outside context and so we manually select a smaller set of 352 words that we believe are much more likely to be effective offensive seeds and only 38 words that we judge as more suitable carriers of hateful content. Table 2 shows samples of the offensive and hateful seeds. Table 3 shows examples of seeds in our initial larger set that we filtered out since these are less likely to carry negative meaning (whether offensive or hateful).",
"cite_spans": [],
"ref_spans": [
{
"start": 614,
"end": 621,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 672,
"end": 679,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.1."
},
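{
"text": "The seed-candidate extraction step can be sketched as follows (illustrative only; tokenization is simplified to whitespace splitting, and the subsequent manual filtering of the 2,158 candidates into 352 offensive and 38 hateful seeds is not shown).",
"code_sketch": [
"def extract_seed_candidates(tweets, trigger='يا'):",
"    # 'يا' is the vocative particle Ya (Oh, you) that acts as the trigger word.",
"    candidates = set()",
"    for tweet in tweets:",
"        tokens = tweet.split()",
"        for i, tok in enumerate(tokens[:-1]):",
"            if tok == trigger:",
"                candidates.add(tokens[i + 1])  # keep the word that follows Ya",
"    return candidates",
"",
"# Run over the positive-class ('offensive' / 'hateful') TRAIN tweets only."
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.1."
},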
{
"text": "To extend the offensive and hateful tweets, we use 500K randomly sampled, unlabeled, tweets from (Abdul-Mageed et al., 2019) that each have at least one occurrence of the trigger word Ya and at least one occurrence of a word from either of our two seed lexica (i.e., the offensive and hateful seeds). 8 We then apply AraNet (Abdul-Mageed et al., 2020) For reference, the majority (%=67) of the collection extracted with our seed lexicon are assigned negative sentiment labels by AraNet. This reflects the effectiveness of our lexicon as it matches our observations about the distribution of sentiment labels in the shared task TRAIN split.",
"cite_spans": [
{
"start": 301,
"end": 302,
"text": "8",
"ref_id": null
},
{
"start": 324,
"end": 351,
"text": "(Abdul-Mageed et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.1."
},
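{
"text": "Putting the seeds and the sentiment condition together, the weak labeling of the extracted collection can be sketched as below (illustrative only; offensive_seeds and hateful_seeds are the manually filtered seed sets, and predict_sentiment() is again a placeholder standing in for AraNet).",
"code_sketch": [
"def weak_label(tweets, offensive_seeds, hateful_seeds, predict_sentiment):",
"    # Keep only tweets predicted as negative sentiment, then assign them to",
"    # the 'offensive' and/or 'hateful' class according to which seed lexicon",
"    # they match (a tweet may match both and receive both labels).",
"    offensive, hateful = [], []",
"    for tweet in tweets:",
"        if predict_sentiment(tweet) != 'neg':",
"            continue",
"        tokens = set(tweet.split())",
"        if tokens & offensive_seeds:",
"            offensive.append(tweet)",
"        if tokens & hateful_seeds:",
"            hateful.append(tweet)",
"    return offensive, hateful"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.1."
},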
{
"text": "To add positive class data (i.e., 'not-offensive' and 'nothateful') to this augmented collection, we randomly sample another 500K tweets that carry Ya from (Abdul-Mageed et al., 2019) that do not carry any of the two offensive and hateful seed lexica. We apply AraNet on these tweets and keep only tweets assigned a positive sentiment label (%=70). We use 215,365 tweets as 'non-offensive' but only 199,291 as 'non-hateful'. 9 Table 1 shows the size and distribution of class labels in our extended dataset. Figure 2 and Figure 1 are word clouds of unigrams in our extended training data (offensive and hateful speech, respectively) after we remove our seed lexica from the data. The clouds show that the data carries lexical cues likely to occur in each of the two classes (offensive and hateful). Examples of frequent words in the offensive class include dog, animal, son of, mother, dirty woman, monster, mad, and on you. Examples in the hateful data include shut up, dogs, son of, animal, dog, haha, and for this reason. We note that the hateful words do not include direct names of groups since these were primarily our seeds that we removed before we prepare the word cloud. Overall, the clouds provide sensible cues of our phenomena of interest across the two tasks. ",
"cite_spans": [],
"ref_spans": [
{
"start": 427,
"end": 434,
"text": "Table 1",
"ref_id": null
},
{
"start": 508,
"end": 516,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 521,
"end": 529,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.1."
},
{
"text": "We perform light Twitter-specific data cleaning (e.g., replacing numbers, usernames, hashtags, and hyperlinks by unique tokens NUM, USER, HASH, and URL respectively). We also perform Arabic-specific normalization (e.g., removing diacritics and mapping various forms of Alef and Yeh each to a canonical form). For text tokenization, we use byte-pair encoding (PBE) as implemented in Multilingual Cased BERT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Pre-Processing",
"sec_num": "4.1."
},
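{
"text": "A sketch of this cleaning and normalization pipeline is given below (illustrative only; the normalization table is an approximation of what is described above, and tokenization is simplified to whitespace splitting).",
"code_sketch": [
"def normalize_arabic(text):",
"    # Map Alef variants to bare Alef and Alef Maqsura to Yeh; drop diacritics.",
"    char_map = {'أ': 'ا', 'إ': 'ا', 'آ': 'ا', 'ى': 'ي'}",
"    out = []",
"    for ch in text:",
"        if 0x064B <= ord(ch) <= 0x0652:  # Arabic diacritics (tashkeel) range",
"            continue",
"        out.append(char_map.get(ch, ch))",
"    return ''.join(out)",
"",
"def clean_tweet(text):",
"    # Replace Twitter-specific items with the unique tokens NUM, USER, HASH, URL.",
"    tokens = []",
"    for tok in text.split():",
"        if tok.startswith('@'):",
"            tokens.append('USER')",
"        elif tok.startswith('#'):",
"            tokens.append('HASH')",
"        elif tok.startswith('http'):",
"            tokens.append('URL')",
"        elif tok.isdigit():",
"            tokens.append('NUM')",
"        else:",
"            tokens.append(tok)",
"    return normalize_arabic(' '.join(tokens))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Pre-Processing",
"sec_num": "4.1."
},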
{
"text": "Our experiments are based on BERT-Base Multilingual Cased model released by (Devlin et al., 2018) 10 . BERT stands for Bidirectional Encoder Representations from Transformers. It is an approach for language modeling that involves two self-supervised learning tasks, (1) masked language models (MLM) and (2) next sentence predication (NSP). BERT is equipped with an Encoder architecture which naturally conditions on bi-directional context. It randomly masks a given percentage of input tokens and attempts to predict these masked tokens. (Devlin et al., 2018 ) mask 15% of the tokens (the authors use 10 https://github.com/google-research/bert/ blob/master/multilingual.md. word pieces) and use the hidden states of these masked tokens from last layer for prediction. To understand the relationship between two sentences, the BERT also pre-trains with a binarized NSP task, which is also a type of self-supervises learning. For the sentence pairs (e.g., A-B) in pre-training examples, 50% of the time B is the actual next sentence that follows A in the corpus (positive class) and 50% of the time B is a random sentence from corpus (negative class). Google's pre-trained BERT-Base Multilingual Cased model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units each, 12 attention heads. The model has 119,547 shared word pieces vocabulary, and was pre-trained on the entire Wikipedia for each language.",
"cite_spans": [
{
"start": 76,
"end": 97,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 538,
"end": 558,
"text": "(Devlin et al., 2018",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "4.2."
},
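{
"text": "As a concrete illustration of the MLM objective described above, the sketch below randomly masks 15% of the word pieces in a tokenized sequence (illustrative only; it omits BERT's 80/10/10 replacement scheme and the construction of NSP sentence pairs).",
"code_sketch": [
"import random",
"",
"def mask_tokens(word_pieces, mask_token='[MASK]', mask_prob=0.15, seed=0):",
"    # Randomly select ~15% of positions; the model is trained to predict the",
"    # original piece at each masked position from its bidirectional context.",
"    rng = random.Random(seed)",
"    masked, targets = [], {}",
"    for i, piece in enumerate(word_pieces):",
"        if rng.random() < mask_prob:",
"            masked.append(mask_token)",
"            targets[i] = piece",
"        else:",
"            masked.append(piece)",
"    return masked, targets",
"",
"print(mask_tokens(['I', 'like', 'this', 'movie', 'very', 'much']))"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "4.2."
},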
{
"text": "In our experiments, we train our classification models on BERT-Base Multilingual Cased model. For all of our fine-tuning BERT models, we use a maximum sequence size of 50 tokens and a batch size of 32. We add a ' [CLS] ' token at the beginning of each input sequence and, then, feed the final hidden state of ' [CLS] ' to a Softmax linear layer to get predication probabilities across classes. We set the learning rate to 2e \u2212 6 and train for 20 epochs. We save the checkpoint at the end of each epoch, report F1-score and accuracy of the best model, and use the best checkpoint to predict the labels of the TEST set. We fine-tune the BERT model under five settings. We describe each of these next. Vanilla BERT: We fine-tune BERT-Base Multilingual Cased model on TRAIN set of offensive task and hate speech task respectively. We refer these two models to BERT. The offensive model obtains the best result with 8 epochs. As Table 4 shows, for offensive language classification, this model obtains 87.10% accuracy and 78.38 F 1 score on DEV set. We submit the TEST prediction of this model to the shared task and obtain 87.30% accuracy and 77.70 F 1 on the TEST set. The hate speech model obtains best result (accuracy = 95.7-%, F1 = 70.96) with 6 epochs.",
"cite_spans": [
{
"start": 213,
"end": 218,
"text": "[CLS]",
"ref_id": null
},
{
"start": 311,
"end": 316,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 924,
"end": 931,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "BERT",
"sec_num": "4.2."
},
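{
"text": "A minimal fine-tuning sketch corresponding to the setup above is given below. It is illustrative only: it uses the Hugging Face transformers library as a stand-in for the original BERT code base, and it omits the per-epoch checkpointing, DEV evaluation, and TEST prediction steps.",
"code_sketch": [
"import torch",
"from torch.optim import AdamW",
"from torch.utils.data import DataLoader, TensorDataset",
"from transformers import BertForSequenceClassification, BertTokenizerFast",
"",
"def finetune(texts, labels, num_labels=2, epochs=20, lr=2e-6):",
"    name = 'bert-base-multilingual-cased'",
"    tokenizer = BertTokenizerFast.from_pretrained(name)",
"    # The final [CLS] state is fed to the classification head inside this model.",
"    model = BertForSequenceClassification.from_pretrained(name, num_labels=num_labels)",
"    # Sequences truncated/padded to 50 word pieces, batch size 32.",
"    enc = tokenizer(texts, truncation=True, padding='max_length', max_length=50, return_tensors='pt')",
"    data = TensorDataset(enc['input_ids'], enc['attention_mask'], torch.tensor(labels))",
"    loader = DataLoader(data, batch_size=32, shuffle=True)",
"    optimizer = AdamW(model.parameters(), lr=lr)",
"    model.train()",
"    for _ in range(epochs):",
"        for ids, mask, y in loader:",
"            out = model(input_ids=ids, attention_mask=mask, labels=y)",
"            out.loss.backward()",
"            optimizer.step()",
"            optimizer.zero_grad()",
"    return model, tokenizer"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "4.2."
},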
{
"text": "We use a BERT model fine-tuned with on binary Arabic sentiment dataset as released by (Abdul-Mageed et al., 2020) . We use this off-the-shelf (already trained) model to further fine-tune on offensive and hate speech tasks, respectively. We replace the Softmax linear layer for sentiment classification with a randomly initialized Softmax linear layer for each task. We refer to these two models as BERT-SENTI. We train the BERT-SENTI models on the TRAIN sets for offensive and hate speech tasks respectively. On F 1 score, BERT-SENTI is 0.3 better than vanilla BERT on the offensive task, but 2.95 lower (than vanilla BERT) on the hate speech task. We submit the TEST predictions of both tasks. The offensive model obtain 87.45% accuracy and 80.51 F 1 on TEST. The hate speech model acquire 93.15% accuracy and 61.57 F 1 on TEST.",
"cite_spans": [
{
"start": 86,
"end": 113,
"text": "(Abdul-Mageed et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-SENTI",
"sec_num": null
},
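{
"text": "The head-replacement step used for BERT-SENTI (and, analogously, for BERT-EMO below) can be sketched as follows; this is illustrative only, with 'path-to-affective-model' a hypothetical checkpoint path and the same transformers-based stand-in as above. The resulting model is then fine-tuned on the target task with the same training loop.",
"code_sketch": [
"import torch",
"from transformers import BertForSequenceClassification",
"",
"def load_affective_model(checkpoint='path-to-affective-model', num_labels=2):",
"    # Load the already fine-tuned sentiment (or emotion) model, then replace its",
"    # Softmax classification layer with a randomly initialized one for the",
"    # target task (offensive or hate speech detection).",
"    model = BertForSequenceClassification.from_pretrained(checkpoint)",
"    model.classifier = torch.nn.Linear(model.config.hidden_size, num_labels)",
"    model.num_labels = num_labels",
"    model.config.num_labels = num_labels",
"    return model"
],
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-SENTI",
"sec_num": null
},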
{
"text": "BERT-EMO Similar to BERT-SENTI, we use a BERT model trained on 8-class Arabic emotion identification from (Abdul-Mageed et al., 2020) to fine-tune on the offensive and hate speech tasks, respectively. We refer to this setting as BERT-EMO. We train the models on the TRAIN sets for both offensive and hate speech tasks for 20 epochs. The offensive model obtains its best result (accuracy = 88.30%, F 1 = 80.39) with 11 epochs. The hate speech model acquires its best result (accuracy = 95.40%, F 1 = 68.54) also with 11 epochs. We do not submit an BERT-EMO on the hate speech task TEST set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-SENTI",
"sec_num": null
},
{
"text": "BERT-EMO-AUG Similar to BERT-EMO, we also finetune the emotion BERT model (BERT-EMO) with the augmented offensive dataset (AUG-TRAIN-OFF) and augmented hate speech dataset (AUG-TRAIN-HS). On the DEV set, the offensive model acquires its best result (accuracy = 89.60%, F 1 = 82.31) with 13 epochs. The best results for the hate speech model (accuracy = 93.90%, F 1 = 62.52) is obtained with 9 epochs. Our best offensive predi-cation on TEST is BERT-EMO-AUG. It which achieves an accuracy of 89.35% and F 1 of 82.85. We do not submit an BERT-EMO-AUG on the hate speech task TEST set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT-SENTI",
"sec_num": null
},
{
"text": "We described our submission to the offensive language detection in Arabic shared task. We offered a simple method to extend training data and demonstrated the utility of such augmented data empirically. We also deploy affective language models on the two sub-tasks of offensive language detection and hate speech identification. We show that finetuning such affective models is useful, especially in the case of offensive language detection. In the future, we will investigate other methods for improving our automatic offensive and hateful language acquisition methods. We also explore other machine learning methods on the tasks. For example, we plan to investigate the utility of semi-supervised methods as a vehicle of improving our models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "Of course hand-crafted features can also be added to a representation fed into a deep learning model. However, we do not do this here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "AraNet (Abdul-Mageed et al., 2020) assigns only positive and negative sentiment labels. In other words, it does not assign neutral labels.8 The 500K collection is extracted via searching a larger sample of \u223c 21M tweets that all have the trigger word Ya. This corpus is also taken from (Abdul-Mageed et al., 2019). Note that a tweet can have both an offensive and a hateful seed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We decided to keep only 199,291 'non-hateful' tweets since our augmented 'hateful' class comprises only 10,489 tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Dianet: Bert and hierarchical attention multi-task learning of fine-grained dialect",
"authors": [
{
"first": "M",
"middle": [],
"last": "Abdul-Mageed",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Elmadany",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rajendran",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.14243"
]
},
"num": null,
"urls": [],
"raw_text": "Abdul-Mageed, M., Zhang, C., Elmadany, A., Rajendran, A., and Ungar, L. (2019). Dianet: Bert and hierarchi- cal attention multi-task learning of fine-grained dialect. arXiv preprint arXiv:1910.14243.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Aranet: A deep learning toolkit for arabic social media",
"authors": [
{
"first": "",
"middle": [],
"last": "Abdul-Mageed",
"suffix": ""
},
{
"first": "Z",
"middle": [
"C"
],
"last": "Muhammad",
"suffix": ""
},
{
"first": "E",
"middle": [
"M B"
],
"last": "Nagoudi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hashemi",
"suffix": ""
}
],
"year": 2020,
"venue": "The 4th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT4), LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdul-Mageed, Muhammad, Z. C., Nagoudi, E. M. B., and Hashemi, A. (2020). Aranet: A deep learning toolkit for arabic social media. In The 4th Workshop on Open- Source Arabic Corpora and Processing Tools (OSACT4), LREC.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Detecting abusive arabic language twitter accounts using a multidimensional analysis model",
"authors": [
{
"first": "E",
"middle": [],
"last": "Abozinadah",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abozinadah, E. (2017). Detecting abusive arabic lan- guage twitter accounts using a multidimensional anal- ysis model. Ph.D. thesis.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Deep learning for detecting cyberbullying across multiple social media platforms",
"authors": [
{
"first": "S",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Awekar",
"suffix": ""
}
],
"year": 2018,
"venue": "European Conference on Information Retrieval",
"volume": "",
"issue": "",
"pages": "141--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agrawal, S. and Awekar, A. (2018). Deep learning for de- tecting cyberbullying across multiple social media plat- forms. In European Conference on Information Re- trieval, pages 141-153. Springer.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Towards accurate detection of offensive language in online communication in arabic",
"authors": [
{
"first": "A",
"middle": [],
"last": "Alakrot",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "N",
"middle": [
"S"
],
"last": "Nikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Procedia computer science",
"volume": "142",
"issue": "",
"pages": "315--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alakrot, A., Murray, L., and Nikolov, N. S. (2018). To- wards accurate detection of offensive language in online communication in arabic. Procedia computer science, 142:315-320.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Are they our brothers? analysis and detection of religious hate speech in the arabic twittersphere",
"authors": [
{
"first": "N",
"middle": [],
"last": "Albadi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kurdi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mishra",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)",
"volume": "",
"issue": "",
"pages": "69--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Albadi, N., Kurdi, M., and Mishra, S. (2018). Are they our brothers? analysis and detection of religious hate speech in the arabic twittersphere. In 2018 IEEE/ACM Interna- tional Conference on Advances in Social Networks Anal- ysis and Mining (ASONAM), pages 69-76. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deep learning for hate speech detection in tweets",
"authors": [
{
"first": "P",
"middle": [],
"last": "Badjatiya",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th International Conference on World Wide Web Companion",
"volume": "",
"issue": "",
"pages": "759--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Badjatiya, P., Gupta, S., Gupta, M., and Varma, V. (2017). Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 759-760.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Follow your ideology: Measuring media ideology on social networks",
"authors": [
{
"first": "P",
"middle": [],
"last": "Barber\u00e1",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sood",
"suffix": ""
}
],
"year": 2015,
"venue": "Annual Meeting of the European Political Science Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barber\u00e1, P. and Sood, G. (2015). Follow your ideology: Measuring media ideology on social networks. In An- nual Meeting of the European Political Science Associa- tion, Vienna, Austria. Retrieved from http://www. gsood. com/research/papers/mediabias. pdf.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Early warning signals for war in the news",
"authors": [
{
"first": "T",
"middle": [],
"last": "Chadefaux",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Peace Research",
"volume": "51",
"issue": "1",
"pages": "5--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chadefaux, T. (2014). Early warning signals for war in the news. Journal of Peace Research, 51(1):5-18.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Political polarization on twitter",
"authors": [
{
"first": "M",
"middle": [
"D"
],
"last": "Conover",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ratkiewicz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Francisco",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Gon\u00e7alves",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Menczer",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Flammini",
"suffix": ""
}
],
"year": 2011,
"venue": "Fifth international AAAI conference on weblogs and social media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conover, M. D., Ratkiewicz, J., Francisco, M., Gon\u00e7alves, B., Menczer, F., and Flammini, A. (2011). Political po- larization on twitter. In Fifth international AAAI confer- ence on weblogs and social media.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving cyberbullying detection with user context",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dadvar",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Trieschnigg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Ordelman",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jong",
"suffix": ""
}
],
"year": 2013,
"venue": "European Conference on Information Retrieval",
"volume": "",
"issue": "",
"pages": "693--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dadvar, M., Trieschnigg, D., Ordelman, R., and de Jong, F. (2013). Improving cyberbullying detection with user context. In European Conference on Information Re- trieval, pages 693-696. Springer.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Seminar users in the arabic twitter sphere",
"authors": [
{
"first": "K",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Alexandrov",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Mejova",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Social Informatics",
"volume": "",
"issue": "",
"pages": "91--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Darwish, K., Alexandrov, D., Nakov, P., and Mejova, Y. (2017). Seminar users in the arabic twitter sphere. In In- ternational Conference on Social Informatics, pages 91- 108. Springer.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "J",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "M.-W",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional trans- formers for language understanding. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Merging datasets for aggressive text identification",
"authors": [
{
"first": "P",
"middle": [],
"last": "Fortuna",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ferreira",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Routar",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nunes",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "128--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fortuna, P., Ferreira, J., Pires, L., Routar, G., and Nunes, S. (2018). Merging datasets for aggressive text identifica- tion. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 128- 139.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Convolutional neural networks for toxic comment classification",
"authors": [
{
"first": "S",
"middle": [
"V"
],
"last": "Georgakopoulos",
"suffix": ""
},
{
"first": "S",
"middle": [
"K"
],
"last": "Tasoulis",
"suffix": ""
},
{
"first": "A",
"middle": [
"G"
],
"last": "Vrahatis",
"suffix": ""
},
{
"first": "V",
"middle": [
"P"
],
"last": "Plagianakos",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 10th Hellenic Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgakopoulos, S. V., Tasoulis, S. K., Vrahatis, A. G., and Plagianakos, V. P. (2018). Convolutional neural networks for toxic comment classification. In Proceed- ings of the 10th Hellenic Conference on Artificial Intelli- gence, pages 1-6.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The pragmatics of swearing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Jay",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Janschewitz",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Politeness Research. Language, Behaviour",
"volume": "4",
"issue": "2",
"pages": "267--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay, T. and Janschewitz, K. (2008). The pragmatics of swearing. Journal of Politeness Research. Language, Behaviour, Culture, 4(2):267-288.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Benchmarking aggression identification in social media",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Ojha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kumar, R., Ojha, A. K., Malmasi, S., and Zampieri, M. (2018). Benchmarking aggression identification in so- cial media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018), pages 1-11.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Locate the hate: Detecting tweets against blacks",
"authors": [
{
"first": "I",
"middle": [],
"last": "Kwok",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "Twenty-seventh AAAI conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwok, I. and Wang, Y. (2013). Locate the hate: Detecting tweets against blacks. In Twenty-seventh AAAI confer- ence on artificial intelligence.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Detecting hate speech in social media",
"authors": [
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.06427"
]
},
"num": null,
"urls": [],
"raw_text": "Malmasi, S. and Zampieri, M. (2017). Detect- ing hate speech in social media. arXiv preprint arXiv:1712.06427.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Filtering aggression from the multilingual social media feed",
"authors": [
{
"first": "S",
"middle": [],
"last": "Modha",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mandl",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)",
"volume": "",
"issue": "",
"pages": "199--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Modha, S., Majumder, P., and Mandl, T. (2018). Filtering aggression from the multilingual social media feed. In Proceedings of the First Workshop on Trolling, Aggres- sion and Cyberbullying (TRAC-2018), pages 199-207.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Arabic offensive language classification on twitter",
"authors": [
{
"first": "H",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Darwish",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Social Informatics",
"volume": "",
"issue": "",
"pages": "269--276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mubarak, H. and Darwish, K. (2019). Arabic offensive language classification on twitter. In International Con- ference on Social Informatics, pages 269-276. Springer.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Abusive language detection on arabic social media",
"authors": [
{
"first": "H",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Magdy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "52--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mubarak, H., Darwish, K., and Magdy, W. (2017). Abu- sive language detection on arabic social media. In Pro- ceedings of the First Workshop on Abusive Language On- line, pages 52-56.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Abusive language detection in online user content",
"authors": [
{
"first": "C",
"middle": [],
"last": "Nobata",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th international conference on world wide web",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nobata, C., Tetreault, J., Thomas, A., Mehdad, Y., and Chang, Y. (2016). Abusive language detection in online user content. In Proceedings of the 25th international conference on world wide web, pages 145-153.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Secular vs. islamist polarization in egypt on twitter",
"authors": [
{
"first": "I",
"middle": [],
"last": "Weber",
"suffix": ""
},
{
"first": "V",
"middle": [
"R K"
],
"last": "Garimella",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Batayneh",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 IEEE/ACM international conference on advances in social networks analysis and mining",
"volume": "",
"issue": "",
"pages": "290--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weber, I., Garimella, V. R. K., and Batayneh, A. (2013). Secular vs. islamist polarization in egypt on twitter. In Proceedings of the 2013 IEEE/ACM international con- ference on advances in social networks analysis and min- ing, pages 290-297.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Overview of the germeval 2018 shared task on the identification of offensive language",
"authors": [
{
"first": "M",
"middle": [],
"last": "Wiegand",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Siegel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ruppenhofer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wiegand, M., Siegel, M., and Ruppenhofer, J. (2018). Overview of the germeval 2018 shared task on the iden- tification of offensive language.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Semeval-2019 task 6: Identifying and categorizing offensive language in social media (offenseval)",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.08983"
]
},
"num": null,
"urls": [],
"raw_text": "Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019). Semeval-2019 task 6: Identi- fying and categorizing offensive language in social me- dia (offenseval). arXiv preprint arXiv:1903.08983.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "A word cloud of unigrams in our extended training offensive data (AUG-TRAIN-OFF).",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "A word cloud of unigrams in our extended training hate speech data (AUG-TRAIN-HS).",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "Distribution of Negative and Positive Tweets after applied AraNet on Shared-Task TRAIN Data",
"type_str": "figure",
"num": null
},
"TABREF1": {
"num": null,
"text": "The following are example tweets from the shared task TRAIN split.Examples of offensive and hateful tweets:",
"html": null,
"type_str": "table",
"content": "<table><tr><td>1)</td></tr><tr><td>Oh my Lord, O One and Only, destroy the family of</td></tr><tr><td>Sau'd, for they are the criminals who put children of</td></tr><tr><td>Yemen to suffer. 6</td></tr><tr><td>. 5</td></tr></table>"
},
"TABREF2": {
"num": null,
"text": "on this 500K collection and keep only tweets assigned negative sentiment labels. Tweets that",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Arabic Offensive English</td><td>Arabic Hateful English</td></tr><tr><td>You, fat ass!</td><td>You're Manjawi</td></tr><tr><td>You're mobby</td><td>You're Dandarawi</td></tr><tr><td>You're a tramp</td><td>You Saudis</td></tr><tr><td>You're crazy</td><td>You're Dahbashi</td></tr><tr><td>You, hungry man!</td><td>You, false claimer</td></tr><tr><td>You, morally loose</td><td>You, Houthi</td></tr><tr><td>Oh, whore</td><td>You, Shiite</td></tr><tr><td>You, junky</td><td>You, spay</td></tr><tr><td>You, animals</td><td>You, Ikhwangis</td></tr><tr><td>You, hateful</td><td>You, Ikhwan</td></tr><tr><td>You, dirty woman</td><td>You, son of tramps</td></tr><tr><td>You, tyrant</td><td>You, bastard</td></tr><tr><td>You, salacious</td><td>You, bastard</td></tr><tr><td>You, idiot</td><td>You, son of Jewish woman</td></tr><tr><td>You, silly woman</td><td>You, son of adulterous woman</td></tr><tr><td>You, sinister</td><td>You, son of deceived woman</td></tr><tr><td>You, stupid head</td><td>You, son of pimp</td></tr><tr><td>You, gloomy head</td><td>You, son of adulterous</td></tr><tr><td>You, unworthy woman</td><td>You, Emirate</td></tr><tr><td>You, fools</td><td>You, Itihadi</td></tr></table>"
},
"TABREF3": {
"num": null,
"text": "Examples of offensive and hateful seeds in our lexica carry offensive seeds are labeled as 'offensive' and those carrying hateful seeds are tagged as 'hateful'. This gives us 265,413 offensive tweets and 10,489 hateful tweets.",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF5": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"num": null,
"text": "78.38 95.70 70.96 87.30 77.70 95.20 70.51 BERT-SENTI 87.40 78.84 95.50 68.01 87.45 80.51 93.15 61.57",
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td>Dev</td><td/><td/><td/><td>Test</td><td/><td/></tr><tr><td/><td>OFF</td><td/><td>HS</td><td/><td>OFF</td><td/><td>HS</td><td/></tr><tr><td>Model</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td><td>Acc</td><td>F1</td></tr><tr><td colspan=\"5\">BERT 87.10 BERT-EMO 88.30 80.39 95.40 68.54</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"7\">BERT-EMO-AUG 89.60 82.31 93.90 62.52 89.35 82.85</td><td>-</td><td>-</td></tr></table>"
},
"TABREF7": {
"num": null,
"text": "Offensive (OFF) and Hate Speech (HS) results on DEV and TEST datasets",
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}