{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:06:22.766769Z" }, "title": "ASU OPTO at OSACT4 -Offensive Language Detection for Arabic text", "authors": [ { "first": "Amr", "middle": [], "last": "Keleg", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ain Shams University", "location": { "addrLine": "2 Optomatica 1 Cairo" } }, "email": "amr.keleg@eng.asu.edu.eg" }, { "first": "Samhaa", "middle": [ "R" ], "last": "El-Beltagy", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Mahmoud", "middle": [], "last": "Khalil", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ain Shams University", "location": { "addrLine": "2 Optomatica 1 Cairo" } }, "email": "mahmoud.khalil@eng.asu.edu.eg" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In the past years, toxic comments and offensive speech are polluting the internet and manual inspection of these comments is becoming a tiresome task to manage. Having a machine learning based model that is able to filter offensive Arabic content is of high need nowadays. In this paper, we describe the model that was submitted to the Shared Task on Offensive Language Detection that is organized by (The 4th Workshop on Open-Source Arabic Corpora and Processing Tools). Our model makes use transformer based model (BERT) to detect offensive content. We came in the fourth place in subtask A (detecting Offensive Speech) and in the third place in subtask B (detecting Hate Speech).", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "In the past years, toxic comments and offensive speech are polluting the internet and manual inspection of these comments is becoming a tiresome task to manage. Having a machine learning based model that is able to filter offensive Arabic content is of high need nowadays. In this paper, we describe the model that was submitted to the Shared Task on Offensive Language Detection that is organized by (The 4th Workshop on Open-Source Arabic Corpora and Processing Tools). Our model makes use transformer based model (BERT) to detect offensive content. We came in the fourth place in subtask A (detecting Offensive Speech) and in the third place in subtask B (detecting Hate Speech).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "During the past decade, Social media platforms such as Facebook and Twitter have attracted millions of users from the Arab region. These platforms have given people the chance to express their ideas, beliefs and feelings. Unlike real life conversations, people tend to be more aggressive when they are communicating through this virtual online world. The aggression might also reach an extreme case where racist, violent and completely unacceptable words are shared online. Sites are trying to control the spread of these toxic comments by manually moderating and checking the reports that other users are filing. Moreover, Some services provide an automatic way to automatically filter offensive content. For example, Google Search has an option to use \"SafeSearch Filters\" which is allows filtering out any harmful or violent content before presenting the search results to the user. All these facts have attracted researchers from all around the world to build different techniques that can be used to automatically detect offensive content. Various definitions and aspects have been used to tackle this task. 
Having a typology that can be clearly agreed upon by humans is of great importance. Mubarak et al. (2017) have used the term abusive speech to refer to offensive text that contains profane content. On the other hand, hate speech (toxic comments) is often used to refer to offensive text that is targeted towards a certain person or a group of people based on a common trait (race, ethnicity, religion, etc.) (Malmasi and Zampieri, 2017). The competition is composed of two subtasks. Subtask A aims at differentiating between offensive and non-offensive text irrespective of the type of the offensive text (hate speech, profanity, cyber-bullying, etc.). Subtask B focuses on detecting text that contains targeted hate speech towards a person or a group of people.", "cite_spans": [ { "start": 1197, "end": 1218, "text": "Mubarak et al. (2017)", "ref_id": "BIBREF7" }, { "start": 1514, "end": 1542, "text": "(Malmasi and Zampieri, 2017)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Background and task description", "sec_num": "1." }, { "text": "Lately, fine-tuning large models that are pre-trained on language modeling tasks, such as BERT (Devlin et al., 2019) and ULMFiT (Howard and Ruder, 2018), using the idea of transfer learning has reached state-of-the-art results in multiple classification tasks. For this competition, we focused on subtask A and tested different models/architectures, keeping in mind that fine-tuned BERT based models should be among the top performing ones. The best performing model for subtask A was then adapted to work on subtask B as well. The following models were developed throughout our experiments 1 :", "cite_spans": [ { "start": 83, "end": 104, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF2" }, { "start": 116, "end": 140, "text": "(Howard and Ruder, 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Systems description", "sec_num": "2." }, { "text": "\u2022 Training a basic model using tf-idf (term frequency-inverse document frequency) and logistic regression. The tf-idf vectorizer generates a sparse representation of the input text using character n-grams in the range [1, 9]. This sparse feature vector is then fed to the logistic regression model to discriminate between the two classes (offensive and non-offensive). This model represents the baseline for all the other deep learning based architectures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems description", "sec_num": "2." }, { "text": "\u2022 Training a 1D convolutional layer using word embeddings from Aravec (Mohammad et al., 2017) as a 2D input array. At first, the line-feed token is replaced by a newline character \\n. Then, the sentence is cleaned in the way used by the Aravec model. This step includes the removal of diacritics and the fixing of elongated words (replacing any sequence of the same character of length two or more by a sequence of length two of the same character). Then, the sentence is tokenized using whitespace. The tokens are mapped to their respective indices in the word2vec model, using 0 as the index for any unknown token.", "cite_spans": [ { "start": 70, "end": 93, "text": "(Mohammad et al., 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Systems description", "sec_num": "2." }, { "text": "The list of ids is then padded with the id 0 so that it has a fixed length of 75 ids, and it is truncated to a length of 75 in case it has more than 75 tokens. 
The list of ids is then used to generate the respective word embeddings. The word embeddings are concatenated to form a 2D array of shape (75, 300), where 300 is the size of the word embedding for each token. 100 different 1D convolutional filters are then applied to the 2D array with a kernel size of 3 and a stride of 1 (i.e., each filter is applied to the word embeddings of every 3 consecutive tokens). A 1D max-pooling layer is then applied with a pool size of 4. Dropout with probability 0.5 follows the max-pooling layer, and then a dense layer of 1 neuron with a sigmoid activation function is used to predict the probability that the sentence is offensive. The model is trained for 2 epochs with L2 regularization (the penalty factor is set to 0.0001).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems description", "sec_num": "2." }, { "text": "The cost function is binary cross-entropy and it is optimized using Adam (Kingma and Ba, 2014). The initial word vectors are also fine-tuned during the training process to minimize the cost function.", "cite_spans": [ { "start": 77, "end": 98, "text": "(Kingma and Ba, 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Systems description", "sec_num": "2." }, { "text": "\u2022 Training a bi-directional LSTM using word embeddings from Aravec. Only the 300,000 most frequent words of the Aravec vocabulary are kept and fine-tuned as part of the model due to the limited GPU memory. After the embedding layer, a bidirectional LSTM layer of 64 cells is used, followed by two dense layers: one of 64 neurons with a ReLU activation function and one of 1 neuron with a sigmoid activation function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems description", "sec_num": "2." }, { "text": "\u2022 Fine-tuning multilingual BERT, which is pre-trained on cased text of the top 104 languages with the largest Wikipedias (including Arabic). The text is tokenized using a word-piece tokenizer (Wu et al., 2016), which is trained on large text in an unsupervised fashion to determine a set of word-pieces that form the words (e.g., the word unaffable might be split into (un, ##aff, ##able) according to the word-pieces that were generated when training the tokenizer). After tokenizing the input text, the tokens are padded/truncated to a length of 75. BERT generates an embedding for the whole sentence using its self-attention layers. A dense layer with a softmax activation is then added to classify the sentence as offensive or not. The whole pre-trained architecture, in addition to the added dense layer, is then fine-tuned using the tagged dataset. The model is fine-tuned for three epochs using a learning rate of 10^-5 and with L2 regularization.", "cite_spans": [ { "start": 195, "end": 212, "text": "(Wu et al., 2016)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Systems description", "sec_num": "2." }, { "text": "\u2022 Fine-tuning AraBERT (a publicly released BERT model trained on Arabic text 2 ). The text is tokenized using Farasa (Abdelali et al., 2016), a segmenter developed to split an Arabic word into its affixes. Then, the tokens are fed to the BERT model. The default values provided by the model's authors were used in the fine-tuning process. 
The training dataset was divided into batches of size 32, where each sample was tokenized to a length of 64. Six epochs were used to fine-tune the pre-trained AraBERT model on the training dataset of 7,000 samples with a learning rate of 10^-5.", "cite_spans": [ { "start": 117, "end": 140, "text": "(Abdelali et al., 2016)", "ref_id": "BIBREF0" }, { "start": 415, "end": 416, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Systems description", "sec_num": "2." }, { "text": "The initial version of AraBERT can be found through: https://github.com/zaidalyafeai/ARBML/issues/18#issuecomment-580924000", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems description", "sec_num": "2." }, { "text": "Moreover, we have built a list of profanity words and used simple augmentation rules to generate the different forms of each word. Mubarak et al. (2017) have demonstrated the effectiveness of using a list of words to detect abusive content in text documents. They used a seed list of bad words and collected user data from Twitter to find other candidate words that: 1) are used by users who have any of the seed words in their tweets, and 2) aren't used by users who don't have any of the seed words in their tweets. We build on the same idea of having a list of profanity words to automatically mark some tweets as offensive irrespective of their context, but we use a morphological approach to augment our seed list of bad words. First, we used a list of bad words that is available online 3 . The list of bad words was manually augmented to include other common forms of an Arabic word by substituting (Taa-marbuta) with (Haa) and substituting (Zain) with (Zaal). Then, the list was further augmented with other bad words that were found in the training dataset through manual inspection. Finally, a list of prefixes and suffixes was used to generate the different morphological forms of each word. For example, if the word is a verb, a fixed list of verbal prefixes and suffixes is attached to it; for one such verb, 113 different morphological forms are generated.", "cite_spans": [ { "start": 131, "end": 152, "text": "Mubarak et al. (2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Systems description", "sec_num": "2." }, { "text": "A seed list of 87 bad words was augmented to reach 5,497 different words (a minimal sketch of this augmentation and look-up procedure is shown below). Some combinations of the prefixes and suffixes might result in words that are not linguistically valid, but our intuition is that since such a word isn't part of the language, nobody will use it, and thus considering it a bad word won't affect the model's precision. Throughout our experiments, we faced problems with reproducing the results of models trained on GPUs across multiple runs, even though we used a fixed random seed of 42 in all our experiments. This seems to be a problem that isn't widely discussed. The reproducibility problem can be partially mitigated by training the model multiple times, saving the trained weights of each run, and then choosing the best performing version of the model. Table 1 reports the accuracy and the macro-averaged precision, recall and F1 scores for the training and development datasets on subtask A. 
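To make the augmentation and look-up procedure described above concrete, the following is a minimal sketch. The actual Arabic seed words, prefixes and suffixes are not reproduced here (placeholder strings are used instead), and the helper names are ours for illustration rather than taken from the released source code.

```python
from itertools import product

def augment_seed_list(seed_words, prefixes, suffixes):
    """Attach every prefix/suffix combination to each seed word.
    The empty string keeps the bare form, so a single word expands into
    up to (len(prefixes) + 1) * (len(suffixes) + 1) surface forms."""
    forms = set()
    for word in seed_words:
        for prefix, suffix in product([""] + prefixes, [""] + suffixes):
            forms.add(prefix + word + suffix)
    return forms

def contains_profanity(tweet, profanity_forms):
    """Simple look-up: a tweet is flagged if any whitespace-separated
    token matches one of the augmented profanity forms."""
    return any(token in profanity_forms for token in tweet.split())

# Placeholder seed list and affixes; the real lists are Arabic.
seed_words = ["badword1", "badword2"]
prefixes = ["pre1", "pre2"]
suffixes = ["suf1", "suf2", "suf3"]

profanity_forms = augment_seed_list(seed_words, prefixes, suffixes)
print(len(profanity_forms))  # up to 12 forms per seed word in this toy setup
```

In the hybrid setup reported below, a tweet matched by this look-up is marked as offensive regardless of the AraBERT model's prediction; linguistically invalid generated forms are harmless because they simply never occur in real tweets.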
Our best model for subtask A was the AraBERT based model, which performed better than the cased multilingual BERT model that is trained using the dumps of the 104 most represented languages on Wikipedia. Researchers focusing on languages other than English have found that a BERT model trained specifically for a certain language, such as German, Greek or Dutch (de Vries et al., 2019), achieves better results than the multilingual BERT model, which might under-represent some languages. Additionally, the results of the OffensEval 2019 competition (Zampieri et al., 2019) reported that 7 out of the top 10 teams used BERT to build their models. Risch et al. (2019) have also shown that a BERT model trained on large German corpora performs better than all the other baseline models. The AraBERT based model was also combined with a simple look-up step that marks a sentence as offensive if it contains any of the words in the augmented profanity list, irrespective of the prediction of the AraBERT model. Using this hybrid approach improved the macro-averaged precision and recall and consequently improved the macro-averaged F1 score, as shown in Table 2. The official macro-averaged F1 score of this hybrid system on the test and development datasets is 0.896, which is much better than that of our second best system, based on the bidirectional LSTM, which achieved an official score of 0.856. For subtask B, we fine-tuned AraBERT using the whole training dataset of 7,000 tweets with the same configuration and hyperparameters that were used in subtask A. Our official macro-averaged F1 score is 0.807, which put our team in third place on the scoreboard.", "cite_spans": [ { "start": 1016, "end": 1039, "text": "(Zampieri et al., 2019)", "ref_id": "BIBREF10" }, { "start": 1130, "end": 1150, "text": "Risch et al. (2019)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 848, "end": 855, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Systems description", "sec_num": "2." }, { "text": "One of the important steps to carry out after training a machine learning model is to check the misclassified samples and to try to find reasonable explanations for such errors. This task might be hard for text data since, unlike images for example, one can't easily find relations between different samples. On checking a random sample of 50 misclassified samples, we found that most of the errors were false negatives (the sample is offensive yet it was classified as not offensive). Additionally, we found that all these samples contained the Arabic vocative article (Ya). This seemed like a really serious problem that needed to be fixed, until we discovered that 6,986 out of 7,000 sentences in the training dataset and 999 out of 1,000 sentences in the development dataset contain the article (Ya). The effect of this observation on the model needs more analysis, but this article was clearly used by the dataset creators to query sentences (tweets), and it might limit the distribution of the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "4." }, { "text": "Human annotation is a tiresome task, especially in the field of natural language processing, since text might sometimes be ambiguous in a way that makes the same sentence carry different meanings. In this section, we shed light on the different issues that we spotted while performing error analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Issues with the Annotation scheme", "sec_num": "4.1." }, { "text": "Presence of a bad word in a non-negative context: The way people perceive and use bad words might depend on different factors such as the dialect that they use or their society's culture. Some words might be accepted in some regions but be completely inappropriate in other regions. 
Additionally, some annotators might neglect the presence of a bad word if the context isn't offensive, while others consider the whole sentence to be offensive if it contains a bad word. Table 3 demonstrates this disagreement between human annotators, where the same bad word (in different forms) was found in a non-offensive context. Annotators considered the first tweet to be not offensive but marked the second one as offensive.", "cite_spans": [], "ref_spans": [ { "start": 466, "end": 473, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Issues with the Annotation scheme", "sec_num": "4.1." }, { "text": "Usage of sarcastic speech quoting popular movie scenes: Our Arabic culture relies heavily on quoting conversations from popular movies. The semantic meaning of these quotes might be offensive, but the pragmatic meaning depends on the context in which they are used. Ambiguity is an issue that arises in almost all systems that operate on linguistic data. Table 4 shows two examples where quotes from movies were used. Although the model can only depend on the semantic meaning of the sentence, we believe that annotators should pick a side and consistently mark such tweets as either offensive or not. The two sentences contain offensive speech, yet one of them was annotated as offensive and the other as non-offensive. Wrong annotations: Errors in human-generated annotations are almost unavoidable, especially when the dataset is large (10,000 tweets) and annotators are asked to provide two different labels for each tweet (offensive or not offensive, and hate speech or not hate speech). We believe that all the samples in Table 5 should have been marked as offensive and as hate speech.", "cite_spans": [], "ref_spans": [ { "start": 360, "end": 367, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Issues with the Annotation scheme", "sec_num": "4.1." }, { "text": "Our experiments reveal that the contextualized word embeddings generated using BERT yield better classifiers for offensive text detection. A BERT model that is pre-trained on large text corpora achieves state-of-the-art results. On the other hand, multilingual BERT seemed to lack the ability to represent Arabic text. This might be attributed to the fact that Arabic text needs to be tokenized in a different way than the other languages that are supported by multilingual BERT. Additionally, using a hybrid approach improved our system for subtask A. Relying on a manually prepared list to mark a sentence that contains a profane word as offensive is a logical way to support machine learning based models. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5."
}, { "text": "The source code for the developed models can be found through:https://github.com/AMR-KELEG/ offenseval-2020-ASU_OPTO", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Farasa: A fast and furious segmenter for arabic", "authors": [ { "first": "A", "middle": [], "last": "Abdelali", "suffix": "" }, { "first": "K", "middle": [], "last": "Darwish", "suffix": "" }, { "first": "N", "middle": [], "last": "Durrani", "suffix": "" }, { "first": "H", "middle": [], "last": "Mubarak", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdelali, A., Darwish, K., Durrani, N., and Mubarak, H. (2016). Farasa: A fast and furious segmenter for arabic.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bertje: A dutch bert model", "authors": [ { "first": "W", "middle": [], "last": "De Vries", "suffix": "" }, { "first": "A", "middle": [], "last": "Van Cranenburgh", "suffix": "" }, { "first": "A", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "T", "middle": [], "last": "Caselli", "suffix": "" }, { "first": "G", "middle": [], "last": "Van Noord", "suffix": "" }, { "first": "M", "middle": [], "last": "Nissim", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "de Vries, W., van Cranenburgh, A., Bisazza, A., Caselli, T., van Noord, G., and Nissim, M. (2019). Bertje: A dutch bert model. ArXiv, abs/1912.09582.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "J", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "M.-W", "middle": [], "last": "Chang", "suffix": "" }, { "first": "K", "middle": [], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional trans- formers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Pa- pers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Universal language model fine-tuning for text classification", "authors": [ { "first": "J", "middle": [], "last": "Howard", "suffix": "" }, { "first": "S", "middle": [], "last": "Ruder", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "328--339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Howard, J. and Ruder, S. (2018). Universal language model fine-tuning for text classification. 
In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia, July. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "D", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "J", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Challenges in discriminating profanity from hate speech", "authors": [ { "first": "S", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "M", "middle": [], "last": "Zampieri", "suffix": "" } ], "year": 2017, "venue": "Journal of Experimental & Theoretical Artificial Intelligence", "volume": "30", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Malmasi, S. and Zampieri, M. (2017). Challenges in discriminating profanity from hate speech. Journal of Experimental & Theoretical Artificial Intelligence, 30(2):187202, Dec.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Aravec: A set of arabic word embedding models for use in arabic nlp", "authors": [ { "first": "A", "middle": [ "B" ], "last": "Mohammad", "suffix": "" }, { "first": "K", "middle": [], "last": "Eissa", "suffix": "" }, { "first": "S", "middle": [], "last": "El-Beltagy", "suffix": "" } ], "year": 2017, "venue": "Procedia Computer Science", "volume": "117", "issue": "", "pages": "256--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad, A. B., Eissa, K., and El-Beltagy, S. (2017). Aravec: A set of arabic word embedding models for use in arabic nlp. Procedia Computer Science, 117:256-265, 11.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Abusive language detection on Arabic social media", "authors": [ { "first": "H", "middle": [], "last": "Mubarak", "suffix": "" }, { "first": "K", "middle": [], "last": "Darwish", "suffix": "" }, { "first": "W", "middle": [], "last": "Magdy", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the First Workshop on Abusive Language Online", "volume": "", "issue": "", "pages": "52--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mubarak, H., Darwish, K., and Magdy, W. (2017). Abu- sive language detection on Arabic social media. In Pro- ceedings of the First Workshop on Abusive Language On- line, pages 52-56, Vancouver, BC, Canada, August. As- sociation for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "hpidedis at germeval 2019: Offensive language identification using a german bert model", "authors": [ { "first": "J", "middle": [], "last": "Risch", "suffix": "" }, { "first": "A", "middle": [], "last": "Stoll", "suffix": "" }, { "first": "M", "middle": [], "last": "Ziegele", "suffix": "" }, { "first": "R", "middle": [], "last": "Krestel", "suffix": "" } ], "year": 2019, "venue": "Preliminary proceedings of the 15th Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "403--408", "other_ids": {}, "num": null, "urls": [], "raw_text": "Risch, J., Stoll, A., Ziegele, M., and Krestel, R. (2019). hpidedis at germeval 2019: Offensive language iden- tification using a german bert model. 
In Preliminary proceedings of the 15th Conference on Natural Lan- guage Processing (KONVENS 2019). Erlangen, Ger- many: German Society for Computational Linguistics & Language Technology, pages 403-408.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Google's neural machine translation system", "authors": [ { "first": "Y", "middle": [], "last": "Wu", "suffix": "" }, { "first": "M", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Z", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "M", "middle": [], "last": "Norouzi", "suffix": "" }, { "first": "W", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "M", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Y", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Q", "middle": [], "last": "Gao", "suffix": "" }, { "first": "K", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "J", "middle": [], "last": "Klingner", "suffix": "" }, { "first": "A", "middle": [], "last": "Shah", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "X", "middle": [], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "S", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Y", "middle": [], "last": "Kato", "suffix": "" }, { "first": "T", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "H", "middle": [], "last": "Kazawa", "suffix": "" }, { "first": "K", "middle": [], "last": "Stevens", "suffix": "" }, { "first": "G", "middle": [], "last": "Kurian", "suffix": "" }, { "first": "N", "middle": [], "last": "Patil", "suffix": "" }, { "first": "W", "middle": [], "last": "Wang", "suffix": "" }, { "first": "C", "middle": [], "last": "Young", "suffix": "" }, { "first": "J", "middle": [], "last": "Smith", "suffix": "" }, { "first": "J", "middle": [], "last": "Riesa", "suffix": "" }, { "first": "A", "middle": [], "last": "Rudnick", "suffix": "" }, { "first": "O", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "M", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2016, "venue": "Bridging the gap between human and machine translation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., Klingner, J., Shah, A., Johnson, M., Liu, X., ukasz Kaiser, Gouws, S., Kato, Y., Kudo, T., Kazawa, H., Stevens, K., Kurian, G., Patil, N., Wang, W., Young, C., Smith, J., Riesa, J., Rudnick, A., Vinyals, O., Corrado, G., Hughes, M., and Dean, J. (2016). 
Google's neural machine translation system: Bridging the gap between human and machine translation.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "SemEval-2019 task 6: Identifying and categorizing offensive language in social media (OffensEval)", "authors": [ { "first": "M", "middle": [], "last": "Zampieri", "suffix": "" }, { "first": "S", "middle": [], "last": "Malmasi", "suffix": "" }, { "first": "P", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "S", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "N", "middle": [], "last": "Farra", "suffix": "" }, { "first": "R", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "75--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019). SemEval-2019 task 6: Iden- tifying and categorizing offensive language in social me- dia (OffensEval). In Proceedings of the 13th Interna- tional Workshop on Semantic Evaluation, pages 75-86, Minneapolis, Minnesota, USA, June. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "TABREF0": { "html": null, "num": null, "type_str": "table", "text": "Results of the developed models on the training and development datasets", "content": "
                                             Training dataset                          Development dataset
Model name                                   Accuracy  Precision  Recall  F1           Accuracy  Precision  Recall  F1
tfidf + logistic regression                  0.889     0.938      0.725   0.778        0.888     0.921      0.694   0.746
CNN + Aravec                                 0.982     0.985      0.959   0.971        0.928     0.906      0.838   0.867
BiLSTM                                       0.999     0.998      0.998   0.998        0.920     0.856      0.884   0.869
Multi-lingual BERT                           0.978     0.975      0.956   0.965        0.905     0.855      0.805   0.826
AraBERT                                      0.998     0.998      0.994   0.996        0.928     0.881      0.871   0.876
" }, "TABREF1": { "html": null, "num": null, "type_str": "table", "text": "Effect of using the list of profane words on the fine-tuned AraBERT reported on the development dataset", "content": "
Model name                                   Accuracy  Precision  Recall  F1
AraBERT                                      0.928     0.881      0.871   0.876
AraBERT + augmented list of profane words    0.930     0.883      0.877   0.880
" }, "TABREF2": { "html": null, "num": null, "type_str": "table", "text": "", "content": "
: Tweets containing bad words with mixed inconsis-
tent labels
IDTextLabel
2206NOT
OFF
7177OFF
" }, "TABREF3": { "html": null, "num": null, "type_str": "table", "text": "Tweets with offensive semantic meaning and sarcastic pragmatic meaning", "content": "
ID     Text               Label
261    RT @USER: URL      OFF
7868   It seems like      NOT OFF
" }, "TABREF4": { "html": null, "num": null, "type_str": "table", "text": "Tweets containing Offensive content with incor-", "content": "
rect labels
IDTextLabel
7106NOT
OFF
-
NOT
HS
7491 @USER @USERNOT
OFF
-
NOT
HS
7358NOT
OFF
-
NOT
HS
6. Bibliographical References
" } } } }