|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T08:06:15.726071Z" |
|
}, |
|
"title": "Quick and Simple Approach for Detecting Hate Speech in Arabic Tweets", |
|
"authors": [ |
|
{ |
|
"first": "Abeer", |
|
"middle": [], |
|
"last": "Abuzayed", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Islamic University of Gaza",

"location": {

"settlement": "Gaza",

"country": "Palestine"
|
} |
|
}, |
|
"email": "aabuzayed1@students.iugaza.edu.ps" |
|
}, |
|
{ |
|
"first": "Tamer", |
|
"middle": [], |
|
"last": "Elsayed", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Qatar University",

"location": {

"settlement": "Doha",

"country": "Qatar"
|
} |
|
}, |
|
"email": "telsayed@qu.edu.qa" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "As the use of social media platforms to freely communicate and share opinions increases extensively, hate speech has become a pressing problem that requires urgent attention. This paper focuses on the problem of detecting hate speech in Arabic tweets. To tackle the problem efficiently, we adopt a \"quick and simple\" approach by which we investigate the effectiveness of 15 classical (e.g., SVM) and neural (e.g., CNN) learning models, while exploring two different term representations. Our experiments on an 8k-tweet labelled dataset show that the best neural learning models outperform the classical ones, while distributed term representation is more effective than the statistical bag-of-words representation. Overall, our best classifier (which combines both CNN and RNN in a joint architecture) achieved a 0.73 macro-F1 score on the dev set, significantly outperforming the majority-class baseline that achieves 0.49 and proving the effectiveness of our \"quick and simple\" approach.",
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "As the use of social media platforms to freely communicate and share opinions increases extensively, hate speech has become a pressing problem that requires urgent attention. This paper focuses on the problem of detecting hate speech in Arabic tweets. To tackle the problem efficiently, we adopt a \"quick and simple\" approach by which we investigate the effectiveness of 15 classical (e.g., SVM) and neural (e.g., CNN) learning models, while exploring two different term representations. Our experiments on an 8k-tweet labelled dataset show that the best neural learning models outperform the classical ones, while distributed term representation is more effective than the statistical bag-of-words representation. Overall, our best classifier (which combines both CNN and RNN in a joint architecture) achieved a 0.73 macro-F1 score on the dev set, significantly outperforming the majority-class baseline that achieves 0.49 and proving the effectiveness of our \"quick and simple\" approach.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Twitter is a place where 330 million users (in 2019, per https://www.statista.com/statistics/282087/number-of-monthly-active-twitter-users/) from every background, race, religion, and nationality interact, communicate, and freely share their ideas, opinions, and beliefs. This makes Twitter easy to exploit for spreading hate speech: content that targets and threatens individuals or groups based on their common characteristics or identities. According to the Twitter hateful conduct policy (https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy), hate speech is to \"attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease\", such as the tweet: \"\u064a\u0627 \u062e\u0648\u0627\u0631\u062c \u064a\u0627 \u0627\u0631\u0647\u0627\u0628\u064a\u064a\u0646 \u064a\u0627 \u0645\u0646\u0628\u0639 \u0627\u0644\u064a\u0647\u0648\u062f\" (O Kharijites, terrorists, the source of the Jews). Twitter encourages users to report any kind of hate speech that violates the hateful conduct policy, so that action can be taken, such as suspending the user or deleting the tweet. Despite the considerable effort that social media sites are making to curb hate speech, it still threatens online communities, and users still encounter it on many platforms. As hate speech might result in serious physical or mental abuse, there is an imperative need to detect and prevent such content on social media platforms.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Several researchers studied hate speech in the social media domain and proposed various approaches to detect it with more focus on English language, e.g., Malmasi and Zampieri (2018) , Watanabe et al. (2018) , Zhang and Luo (2018) . However, detecting hate speech in Arabic content is still nascent. The richness and complexity of the nature and structure of the Arabic language, the variety of dialects, and the problems at orthographic, morphological, and syntactic levels make detecting hate speech in Arabic very challenging.",
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 182, |
|
"text": "Malmasi and Zampieri (2018)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 207, |
|
"text": "Watanabe et al. (2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 230, |
|
"text": "Zhang and Luo (2018)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In this work, we conduct a preliminary study on the detection of hate speech in Arabic tweets as part of our participation in the Hate Speech Detection subtask in OSACT4 workshop (Mubarak et al., 2020) (http://edinburghnlp.inf.ed.ac.uk/workshops/OSACT4/). Given the tight time we had for participation (we only had 3 days before the submission deadline), we aim to tackle the",
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 201, |
|
"text": "(Mubarak et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "classification problem in a simple, quick, yet effective approach. We elect to use \"simple\" features that are not problem-specific but easy to compute or use, while leveraging the richness, maturity, and strong support for \"quick\" development that current popular machine learning frameworks (e.g., Keras) provide. Adopting this quick and simple approach for developing our classification system for hate speech detection, we investigate the performance of several learning models and aim to answer two research questions in the context of this problem: RQ1. Is distributed (latent) word representation (e.g., Word2Vec embeddings) more effective than standard statistical bag-of-words representation (e.g., tf-idf)? RQ2. Are neural models more effective than classical machine learning models?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "To answer both questions, we conducted experiments over seven classical and eight neural learning models using the labelled dataset of 8,000 tweets provided by the shared task organizers, and submitted two runs on the test set. Our results show that, surprisingly, the bag-of-words tf-idf representation is more effective than the distributed word embeddings representation; however, the best neural models outperform the classical models. Overall, our best classifier achieved a reasonable 0.73 macro-F1 score on the dev set, which significantly outperforms the majority-class baseline that achieves 0.49, proving the effectiveness of our quick and simple approach.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Our contribution in this work is two-fold: 1. We conducted a preliminary study investigating the performance of 15 different classical and neural learning models for detecting hate speech in Arabic tweets. 2. We demonstrated a simple and quick approach of developing a system that was implemented in less than 3 days to tackle the problem, yet achieved reasonable performance. We make all of our code open-source for the research community.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The paper is organized as follows. Section 2 describes related work. Section 3 outlines our approach in tackling the problem. Section 4 presents our experimental evaluation results. Section 5 concludes our work with potential future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction",

"sec_num": "1."
|
}, |
|
{ |
|
"text": "As mentioned earlier, there are several research studies conducted to study hate speech in online communities over English content. Mondal et al. (2017) conducted a study in online social media to understand how social media platforms are rich with hate speech and to investigate the most popular hate expressions and the main targets of online hate speech. Malmasi and Zampieri (2018) aimed to distinguish hate speech from general profanity using a dataset annotated as \"hate, offensive, and ok\", with advanced ensemble classifiers and stacked generalization along with various features such as n-grams, skip-grams, and clustering-based word representations. Additionally, Watanabe et al. (2018) classify tweets based on three labels (clean, offensive and hateful) using sentiment-based features, semantic features, unigram features, and pattern features. Zhang and Luo (2018) also conducted studies on Twitter hate speech for the English language.",
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 152, |
|
"text": "Mondal et al. (2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 358, |
|
"end": 385, |
|
"text": "Malmasi and Zampieri (2018)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 674, |
|
"end": 696, |
|
"text": "Watanabe et al. (2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 857, |
|
"end": 877, |
|
"text": "Zhang and Luo (2018)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Other researchers focused on detecting offensive language over Arabic content, where a number of studies were conducted to detect offensive and abusive language for Arabic Tweets and for YouTube comments (Mubarak and Darwish, 2019; Alakrot et al., 2018; Mohaouchane et al., 2018; Mubarak et al., 2017) . However, hate speech is different from offensive and abusive language (Malmasi and Zampieri, 2018) . Also, Zhang and Luo (2018) argued the same point and noted that the term \"hate speech\" may overlap with other terms such as \"offensive\", \"profane\", and \"abusive\". To distinguish them, they defined hate speech as \"targeting individuals or groups on the basis of their characteristics and demonstrating a clear intention to incite harm, or to promote hatred and this speech may or may not use offensive or profane words\".",
|
"cite_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 231, |
|
"text": "(Mubarak and Darwish, 2019;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 253, |
|
"text": "Alakrot et al., 2018;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 254, |
|
"end": 279, |
|
"text": "Mohaouchane et al., 2018;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 280, |
|
"end": 301, |
|
"text": "Mubarak et al., 2017)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 402, |
|
"text": "(Malmasi and Zampieri, 2018)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 431, |
|
"text": "Zhang and Luo (2018)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Consequently, hate speech should be distinguished from merely offensive or profane language. Thus, other studies focus solely on hate speech detection. Albadi et al. (2018) developed a system to detect religious hate speech in Arabic tweets, using three approaches to tackle this problem. First, they constructed an Arabic lexicon of religious hate speech and used it to label a tweet as \"hate\" if any of its terms exist in the lexicon, and as \"not hate\" otherwise. Second, they trained Logistic Regression and SVM classifiers using n-gram models. Finally, they adopted a GRU model with the pre-trained embedding model AraVec (Twitter-CBOW, 300D architecture), which achieved a 0.77 F1 score.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Moreover, Chowdhury et al. (2019) studied religious hate speech in Arabic tweets too, where they argued that considering the community interactions can raise the ability to detect hate speech content on social media. To investigate this, Arabic word embedding (AraVec, Twitter-CBOW 300D architecture), social network graphs, and neural networks (e.g., RNN+CNN) were used. They pointed out that considering community interactions significantly improves the result and outperforms Albadi et al. (2018) performance, where the combination of social network graphs and joint LSTM and CNN model achieved 0.78 F1 score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 479, |
|
"end": 499, |
|
"text": "Albadi et al. (2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Furthermore, there are studies on hate speech detection in multilingual tweets including Arabic. Ousidhoum et al. (2019) used the bag of words (BOW) as features with Logistic Regression (LR) and deep learning models to detect hate speech in multilingual tweets. Smedt et al. (2018) conducted an experiment to detect online Jihadist hate speech in multilingual tweets, where SVM was used to classify tweets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 120, |
|
"text": "Ousidhoum et al. (2019)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 281, |
|
"text": "Smedt et al. (2018)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "In this study, we focus on detecting hate speech in Arabic tweets using several classical and neural learning models with tf-idf and word embeddings features. We adopt a quick and simple approach of developing our classifiers and conducting our experiments, focusing on unigram representations that are problem-independent, while leveraging the power and ease-of-use of existing learning frameworks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "We approach hate speech detection as a supervised learning problem. In our study, we experimented with several classical and neural learning models trained for detecting Arabic hate speech on Twitter. We adopted basic text preprocessing and two main feature extraction techniques for comparison.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "To prepare our dataset for the feature extraction process, basic text preprocessing is done as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Punctuation marks, foreign characters, and numbers (including user mentions and URLs), as well as diacritics (tashdid, fatha, tanwin fath, damma, tanwin damm, kasra, tanwin kasr, sukun, and tatwil/kashida), are all removed. We also removed repeated characters. \u2022 The remaining Arabic text is normalized. Letters are normalized as follows:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "\u2022 {\"\u0625\", \"\u0623\", \"\u0622\"} to \"\u0627\" \u2022 \"\u064a\" to \"\u0649\" \u2022 \"\u0624\" to \"\u0621\" \u2022 \"\u0626\" to \"\u0621\" \u2022 \"\u0629\" to \"\u0647\" \u2022 \"\u06af\" to \"\u0643\". While some normalization had already been applied when building the pre-trained word embedding model (AraVec2.0) used in our experiments, we augmented it with additional steps.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We adopted two main simple and problem-independent feature extraction techniques: tf-idf and word embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Extraction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Firstly, the tf-idf (term frequency-inverse document frequency) weight indicates how relevant a term is to a document in a collection of documents. In our experiments, tf-idf weights are only used with the classical machine learning algorithms, in order to compare against using word embeddings as features.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Extraction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Secondly, word embeddings are the most popular distributed representation of words (or terms). Each word in the vocabulary is represented as a vector of a few hundred dimensions, where words that have the same meaning are closer to each other, while the words with different meanings are far apart. This is done by learning the vector representation of the words through the contexts in which they appear. One of the popular techniques for efficiently learning a standalone word embedding from a text corpus is Word2Vec (Mikolov et al., 2013) . There are two different learning models to learn the embeddings, Skip Gram and Continuous Bag of Words (CBOW). The CBOW model learns the embeddings by predicting the current word using the context as an input, while the continuous skip-gram takes the current word as input and learns the embeddings by predicting the surrounding words ( Mikolov et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 520, |
|
"end": 542, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 880, |
|
"end": 903, |
|
"text": "( Mikolov et al., 2013)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Extraction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In our experiments, we used the pre-trained Arabic word embedding model AraVec2.0 (Soliman et al., 2017) , which provides various pre-trained Arabic word embedding model architectures; each is trained on one of three different datasets: tweets, Web pages, and Wikipedia Arabic articles. Moreover, for each dataset, two models are built: one using Skip Gram and another using CBOW. For the purpose of this study, we used the pre-trained SkipGram 300D-embeddings trained on more than 77M tweets, since we work on tweets. We used the pre-trained model in both classical and neural learning approaches. To use it with classical learning algorithms, the average vector of all the embeddings of the tweet words is computed and used as the feature vector of the tweet. However, for the neural learning models, the embedding vectors are used to initialize the weights of the embedding layer, which is then connected to the rest of the layers in the network.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 104, |
|
"text": "(Soliman et al., 2017)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Extraction", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This section describes the classical and neural learning models used in our experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We experimented with various classical machine learning models, namely SVM, Random Forest, XGBoost, Extra Trees, Decision Trees, Gradient Boosting, and Logistic Regression. These models are trained along with both types of features we described earlier, tf-idf and pre-trained word embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classical Learning Models", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "We experimented with two types of neural models, Recurrent Neural Networks (RNN), and Convolutional Neural Networks (CNN). We tried different RNN architectures, namely Long Short-Term Memory (LSTM), Bidirectional LSTM (BLSTM), and Gated Recurrent Unit (GRU). We also tried a combination of both CNN and RNN. Previous studies showed that the joint CNN and RNN architecture outperforms CNN or RNN alone in natural language processing tasks such as sentiment analysis and text classification tasks (Wang et al., 2016 and Zhou et al. 2015) . This combined architecture allows the network to learn local features from the CNN, and long-term dependencies, positional relation of features, and global features from the RNN (Wang et al., 2016) . The combined architecture used in this work consists of one CNN layer with max-pooling and time distributed layer, followed by one RNN layer and dropout layer, as shown in the example joint CNN and LSTM architecture in Figure 1 . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 495, |
|
"end": 517, |
|
"text": "(Wang et al., 2016 and", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 518, |
|
"end": 535, |
|
"text": "Zhou et al. 2015)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 716, |
|
"end": 735, |
|
"text": "(Wang et al., 2016)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 957, |
|
"end": 965, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Neural Learning Models", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "In this section, we present and analyze the performance of our trained models. We start with the experimental setup, followed by the analysis of the two experiments we conducted to answer the two research questions. Finally, we discuss the results of our two submitted runs to the shared task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Evaluation", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "For the purpose of this study, we use SemEval 2020 Arabic offensive language dataset (OffensEval 2020, Subtask B for detecting hate speech) (Mubarak et al., 2020). The dataset was split into train, dev, and test sets (70%, 10%, and 20%, respectively). There are 7,000 training tweets, only 361 of which (about 5.2%) are labelled as hate speech. Of the 1,000 dev tweets, only 44 (4.4%) are labelled as hate speech. This shows that the two classes in the dataset are clearly imbalanced.",
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 162, |
|
"text": "(Mubarak et al., 2020)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As expected, we used the training set to learn each model's parameters and the dev set to tune its hyperparameters. The hyperparameters and their tuned values are listed in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 180, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "4.1" |
|
}, |
|
{

"text": "Table 1: Tuned values of the hyperparameters. Number of filters (CNN): 25; Kernel size (CNN): 5; Number of hidden units (RNN): 16; Dropout rate (regularizer): 0.5; Learning rate (Adam optimizer): 0.001.",

"cite_spans": [],

"ref_spans": [

{

"start": 0,

"end": 7,

"text": "Table 1",

"ref_id": null

}

],

"eq_spans": [],

"section": "Experimental Setup",

"sec_num": "4.1"

},
|
{ |
|
"text": "To answer the two research questions we listed in Section 1, we conducted two main experiments. The first compares the use of tf-idf vs word embeddings features, conducted on classical machine learning models. The second compares classical vs. neural models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup",

"sec_num": "4.1"
|
}, |
|
{ |
|
"text": "We evaluated the performance of our models using two measures: macro-averaged F1 (the official shared task measure) and F1 score on the hate speech (HS) class (since the target HS class is scarce). All reported results in this section are on the dev set unless otherwise mentioned. Notice that the majority-class baseline on the dev set yields a 0.49 macro-F1 score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup",

"sec_num": "4.1"
|
}, |
|
{ |
|
"text": "It is worth noting that all of our development and experiments were performed through Google Colaboratory using Python and Keras libraries.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup",

"sec_num": "4.1"
|
}, |
|
{ |
|
"text": "To answer RQ1, we conducted an experiment over the seven classical models listed in Section 3.3.1 using both tf-idf and pre-trained word embeddings (AraVec 2.0).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RQ1: tf-idf vs. Word Embeddings", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Figure 2 depicts the performance of the models in each of the two cases, measured in macro-averaged F1. There are several interesting observations. First, we notice that the performance using tf-idf varies from 0.49 to 0.68, while using word embeddings it varies from 0.51 to 0.57. Second, some models (e.g., SVM) exhibited slightly better performance using word embeddings; however, more models (e.g., Random Forest) exhibited much better performance using tf-idf. Overall, the best three models (namely Extra Trees, Random Forest, and Gradient Boosting, respectively) all indeed use tf-idf. This is a surprising result, since tf-idf features neither capture meaning nor are contextualized; both attributes are (or at least should be) captured by word embeddings. This observation definitely needs more investigation. Figure 3 illustrates the performance of the models in each of the two cases, but this time measured in F1 over the positive class. It indicates very similar, but even stronger, observations. Moreover, it clearly shows that the task of detecting HS tweets is, not surprisingly, much harder than detecting non-HS tweets, achieving an F1 score of 0.39 at best.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 824, |
|
"end": 832, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "RQ1: tf-idf vs. Word Embeddings", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We now turn our attention to RQ2, which is concerned with comparing classical and neural models. We considered the best-performing classical model, i.e., Extra Trees with tf-idf features, as the baseline, which we compare against eight neural models:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RQ2: Classical vs. Neural Models", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 The first three are RNN models, namely LSTM, BLSTM, and GRU. \u2022 The fourth is CNN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RQ2: Classical vs. Neural Models", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u2022 The next three are combined CNN and RNN models, one for each RNN type. \u2022 The last one is a combined CNN and LSTM version that is trained on oversampled training data to address the unbalanced data problem, where some HS (i.e., minority class) examples are replicated. Due to time constraints, we only trained the neural models using word embeddings. According to the results of the first experiment in Section 4.2, using tf-idf features is worth trying too; we defer this to future exploration. Table 2 : Examples (from the dev set) of correct (bolded) and incorrect (underlined) classification using Extra Trees models with word embeddings and tf-idf, and the combined CNN+LSTM neural model. Figure 4 depicts the performance of all tried neural models along with the baseline, measured in macro-averaged F1. Similar to the first experiment, we have several interesting observations. First, the figure shows that combining CNN and RNN models improved performance over the individual models. Second, the classical model unexpectedly exhibits performance comparable to several neural models. Third, combining CNN and LSTM exhibited the best performance, outperforming all other neural models in addition to the baseline classical model. Finally, oversampling did not help, at least when applied to the best-performing model. Figure 5 indicates the same performance patterns measured in F1 on the HS class; however, the performance gap between the best neural model and the baseline is even wider. Table 2 shows examples (from the dev set) of correct and incorrect classification using tf-idf and word embeddings with the Extra Trees classifier, and the combined CNN and LSTM neural model. The table shows that the models made different mistakes.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 504, |
|
"end": 511, |
|
"text": "Table 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 701, |
|
"end": 709, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 1328, |
|
"end": 1336, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF4" |
|
}, |
|
{ |
|
"start": 1508, |
|
"end": 1515, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "RQ2: Classical vs. Neural Models", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Based on the results above, we chose the combined CNN and LSTM model, in addition to its oversampled version, to submit results on the test set to the shared task. Table 3 shows the results of the two models as reported by the task organizers, compared to the results on the dev set. As expected, the performance on the test set is slightly lower than on the dev set; however, the version trained without oversampling still outperforms the oversampled one.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 170, |
|
"text": "Table 3", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Submitted Runs", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In this paper, we presented a quick and simple approach to tackle the problem of detecting hate speech in Arabic tweets. Our approach adopts simple, problem-independent features to represent terms in tweets and leverages the rapid development that existing powerful machine learning libraries support. We compared 15 classical and neural learning models along with two different term representations (tf-idf and word embeddings). Our experiments over an 8k-tweet labelled dataset of Arabic tweets showed that the tf-idf representation is more effective than word embeddings when used in classical models, and that the best neural learning model (a joint CNN and LSTM architecture) outperforms the classical ones. To our knowledge, this is the first time a combined CNN and LSTM model has been used to detect hate speech in Arabic tweets.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "The classification performance achieved by this combined model exhibited a significant improvement over the majority-class baseline, proving the effectiveness of our \"quick and simple\" approach. For future work, we plan to conduct several experiments. Firstly, as it shows better performance with classical learning models, we will consider using tf-idf representation with neural models as well. Secondly, we plan to experiment with transfer learning techniques to leverage the models that are trained for related tasks such as offensive language detection. Thirdly, we will further investigate the sampling techniques to overcome the unbalanced data problem. Finally, since the pre-trained model BERT yields the state of the art performance in several natural language processing tasks (Devlin et al., 2018) , it is worth trying for hate speech detection too.", |
|
"cite_spans": [ |
|
{ |
|
"start": 788, |
|
"end": 809, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "https://github.com/AbeerAbuZayed/QUIUG_Hate-Speech-Detection_OSACT4-Workshop", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Towards accurate detection of offensive language in online communication in arabic", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Alakrot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Murray", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Nikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Procedia computer science", |
|
"volume": "142", |
|
"issue": "", |
|
"pages": "315--320", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alakrot, A., Murray, L. and Nikolov, N.S., 2018. Towards accurate detection of offensive language in online communication in arabic. Procedia computer science , 142, pp.315-320.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Are they our brothers? Analysis and detection of religious hate speech in the Arabic Twittersphere", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Albadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kurdi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Mishra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "IEEE/ACM ASONAM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "69--76", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Albadi, N., Kurdi, M. and Mishra, S., 2018, August. Are they our brothers? Analysis and detection of religious hate speech in the Arabic Twittersphere. In 2018 IEEE/ACM ASONAM (pp. 69-76).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "ARHNet-Leveraging Community Interaction for Detection of Religious Hate Speech in Arabic", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Chowdhury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Didolkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sawhney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Shah", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of ACL: Student Research Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "273--280", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chowdhury, A.G., Didolkar, A., Sawhney, R. and Shah, R., 2019. ARHNet-Leveraging Community Interaction for Detection of Religious Hate Speech in Arabic. In Proceedings of ACL: Student Research Workshop (pp. 273-280).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "BERT", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Devlin, J., Chang, M.W., Lee, K. and Toutanova, K., 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 .", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Challenges in discriminating profanity from hate speech", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Malmasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Zampieri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Journal of Experimental & Theoretical Artificial Intelligence", |
|
"volume": "30", |
|
"issue": "2", |
|
"pages": "187--202", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Malmasi, S. & Zampieri, M., 2018. Challenges in discriminating profanity from hate speech. Journal of Experimental & Theoretical Artificial Intelligence, 30(2), pp. 187-202.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S. and Dean, J., 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (pp. 3111-3119).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Detecting Offensive Language on Arabic Social Media Using Deep Learning", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Mohaouchane", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mourhir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Nikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "IEEE SNAMS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "466--471", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohaouchane, H., Mourhir, A. and Nikolov, N.S., 2019, October. Detecting Offensive Language on Arabic Social Media Using Deep Learning. In IEEE SNAMS (pp. 466-471).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Measurement Study of Hate Speech in Social Media", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Mondal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Silva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Benevenuto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "85--94", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mondal, M., Silva, L. A. & Benevenuto, F., 2017. A Measurement Study of Hate Speech in Social Media.. New York, USA, ACM, p. 85-94.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Arabic Offensive Language Classification on Twitter", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Mubarak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "International Conference on Social Informatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "269--276", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mubarak, H. & Darwish, K., 2019. Arabic Offensive Language Classification on Twitter. In International Conference on Social Informatics (pp. 269-276). Springer.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Abusive language detection on Arabic social media", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Mubarak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Magdy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Abusive Language Online", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "52--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mubarak, H., Darwish, K. and Magdy, W., 2017, August. Abusive language detection on Arabic social media. In Proceedings of the First Workshop on Abusive Language Online (pp. 52-56).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Overview of OSACT4 Arabic Offensive Language Detection Shared Task", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Mubarak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Darwish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Magdy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Elsayed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Al-Khalifa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT)", |
|
"volume": "4", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mubarak, H., Darwish, K., Magdy, W., Elsayed, T. and Al-Khalifa, H., 2020. Overview of OSACT4 Arabic Offensive Language Detection Shared Task. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT) , vol. 4.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Automatic detection of online jihadist hate speech", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Smedt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "De Pauw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Van Ostaeyen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1803.04596" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Smedt, T., De Pauw, G. and Van Ostaeyen, P., 2018. Automatic detection of online jihadist hate speech. arXiv preprint arXiv:1803.04596.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Aravec: A set of arabic word embedding models for use in arabic nlp", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Soliman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Eissa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "El-Beltagy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Procedia Computer Science", |
|
"volume": "117", |
|
"issue": "", |
|
"pages": "256--265", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Soliman, A.B., Eissa, K. and El-Beltagy, S.R., 2017. Aravec: A set of arabic word embedding models for use in arabic nlp. Procedia Computer Science, 117, pp.256-265.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Combination of convolutional and recurrent neural network for sentiment analysis of short texts", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of COLING 2016: Technical papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2428--2437", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wang, X., Jiang, W. and Luo, Z., 2016. Combination of convolutional and recurrent neural network for sentiment analysis of short texts. In Proceedings of COLING 2016: Technical papers (pp. 2428-2437).", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Hate speech on Twitter: A pragmatic approach to collect hateful and offensive expressions and perform hate speech detection", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Watanabe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Bouazizi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ohtsuki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "IEEE Access", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "13825--13835", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Watanabe, H., Bouazizi, M. & Ohtsuki, T., 2018. Hate speech on Twitter: A pragmatic approach to collect hateful and offensive expressions and perform hate speech detection. IEEE Access, 6, pp. 13825-13835.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Hate speech detection: A solved problem? the challenging case of long tail on Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhang, Z. & Luo, L., 2018. Hate speech detection: A solved problem? the challenging case of long tail on Twitter. CoRR abs/1803.03662 (2018).", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Hate Speech Detection Using a Convolution-LSTM Based Deep Neural Network", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Robinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ".", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Tepper", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of ACM WWW conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhang, Z., Robinson, D. & Tepper, a. J., 2018. Hate Speech Detection Using a Convolution-LSTM Based Deep Neural Network. In Proceedings of ACM WWW conference (WWW'2018). New York, NY, USA.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A C-LSTM neural network for text classification", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Lau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1511.08630" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhou, C., Sun, C., Liu, Z. and Lau, F., 2015. A C-LSTM neural network for text classification. arXiv preprint arXiv:1511.08630.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Joint CNN and LSTM model architecture.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Macro F1 of classical learning models.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"text": "F1 on HS class of classical learning models.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Macro F1 of neural learning models compared to the best-performing classical model.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"uris": null, |
|
"text": "F1 on HS class of neural learning models compared to the best-performing classical model.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>Tweet</td><td/><td>True label</td><td>Predicted (Extra Trees, Embeddings)</td><td>Predicted (Extra Trees, tf-idf)</td><td>Predicted (CNN+LST M)</td></tr><tr><td/><td/><td>HS</td><td>HS</td><td>HS</td><td>HS</td></tr><tr><td>Shut up, O loser and traitor!</td><td>\u202b\ufecb\ufee4\ufbff\ufede\u202c \u202b\ufbfe\ufe8e\u202c \u202b\ufea7\ufe8e\ufbfe\ufee6\u202c \u202b\ufbfe\ufe8e\u202c \u202b\ufed3\ufe8e\ufeb7\ufede\u202c \u202b\ufbfe\ufe8e\u202c \u202b\ufe91\ufeb2\u202c</td><td>NOT_HS</td><td>NOT_HS</td><td>NOT_HS</td><td>NOT_HS</td></tr><tr><td colspan=\"2\">\u202b\u0627\u062f\u062f\u062f\u062f\u062f\u062f\u0627\u0627\u0627\u202c \u202b\u0627\ufedf\ufea4\ufebc\ufe8e\ufedf\ufe94\u202c \u202b\ufea7\ufe92\ufeee\u0627\u202c \u202b\u0627\ufedf\ufebc\ufeaa\u0627\u0627\u0627\u0627\u0627\u0627\u0631\u0629\u202c \u202b\u0641\u202c \u202b\u0627\ufefb\ufeeb\ufee0\ufef2\u202c \u202b\ufbfe\ufe8e\u0627\u062f\ufbfe\ufe90\u202c \u202b\ufecb\ufee4\ufeae\u0648\u202c \u202b\ufbfe\ufe8e\u202c \u202b\ufeeb\ufee8\ufbff\ufeaa\u064a\u202c \u202b\ufbfe\ufe8e\u202c</td><td/><td/><td/><td/></tr><tr><td/><td>\u202b#\ufedf\ufee0\ufea8\ufee0\ufed2_\u062f\u0631\u0631_\ufbfe\ufe8e_\u0632\ufee3\ufe8e\ufedf\ufeda\u202c</td><td>HS</td><td>NOT_HS</td><td>HS</td><td>NOT_HS</td></tr><tr><td colspan=\"2\">O Hinaidi, O Amr Adeeb, Al-Ahly is taking the lead. 
#GoBack_Zamalek team.</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">\u202b\u0631\ufe91\ufee8\ufe8e\u202c \u202b\ufbfe\ufecc\ufeae\ufed3\ufeee\u0634\u202c \u202b\ufee3\ufe8e\u202c \u202b\u0627\ufedf\ufee0\ufef0\u202c \u202b\u0627\ufedf\ufedc\ufed4\ufeae\u0629\u202c \u202b\u0632\u0649\u202c \u202b\u062f\ufbfe\ufee4\ufed8\ufeae\u0627\ufec3\ufbff\ufe94\u202c \u202b\ufecb\ufee8\ufeaa\ufee7\ufe8e\u202c \u202b\ufbfe\ufe92\ufed8\ufef0\u202c \u202b\ufecb\ufe8e\u0648\u0632\u202c \u202b\u0627\ufee7\ufe96\u202c \u202b\ufbfe\ufe8e\u202c \u202b\ufee3\ufeae\ufe97\ufeaa\u202c \u202b\ufbfe\ufe8e\u202c \u202b\u0632\ufee7\ufeaa\ufbfe\ufed6\u202c \u202b\ufbfe\ufe8e\u202c \u202b\ufedb\ufe8e\ufed3\ufeae\u202c \u202b\ufbfe\ufe8e\u202c</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">O bastard and Godless! You want us to have a democracy like the infidels, who</td><td>NOT_HS</td><td>NOT_HS</td><td>NOT_HS</td><td>HS</td></tr><tr><td>do not believe in God.</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">1417 \u202b\ufed7\ufe92\ufede\u202c \u202b\u0627\ufefb\ufe97\ufea4\ufe8e\u062f\u202c \u202b\ufe91\ufec4\ufeee\ufefb\u062a\u202c \u202b\ufeb7\ufeee\u0641\u202c \u202b\ufe9f\ufe8e\ufeeb\ufede\u202c \u202b\ufbfe\ufe8e\u202c \u202b\ufe9f\ufeaa\u0629\u202c \u202b\ufebb\ufed0\ufbff\ufeae\u202c \u202b\ufbfe\ufe8e\u202c \u202b\ufec3\ufea4\ufee0\ufe92\ufef2\u202c \u202b\ufbfe\ufe8e\u202c \u202b\ufeeb\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeec\ufeea\u202c</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Hahahahahahaha! You, little kid of Jeddah, you ignorant. Check the Al-Ittihad</td><td>HS</td><td>NOT_HS</td><td>NOT_HS</td><td>HS</td></tr><tr><td>[tournaments/ championships] before 1417.</td><td/><td/><td/><td/><td/></tr></table>", |
|
"text": "\u202b\u0627\ufedf\ufbff\ufeec\ufeee\u062f\u202c \u202b\ufee7\ufeb4\ufede\u202c \u202b\ufbfe\ufe8e\u202c \u202b\ufeb3\ufee0\ufeee\u0644\u202c \u202b\u0622\u0644\u202c \u202b\ufbfe\ufe8e\u202c \u202b\u0625\ufbfe\ufeae\u0627\u0646\u202c \u202b\u0623\ufe97\ufe92\ufe8e\u0639\u202c \u202b\u0625\ufea3\ufee8\ufe8e\u202cWe are followers of Iran, O family of Salul, descendants of the Jews.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"content": "<table/>", |
|
"text": "Macro F1 scores of submitted models.", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |