{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:06:24.065299Z" }, "title": "Overview of OSACT4 Arabic Offensive Language Detection Shared Task", "authors": [ { "first": "Hamdy", "middle": [], "last": "Mubarak", "suffix": "", "affiliation": { "laboratory": "", "institution": "HKBU", "location": { "settlement": "Doha", "country": "Qatar" } }, "email": "humbarak@hbku.edu.qa" }, { "first": "Kareem", "middle": [], "last": "Darwish", "suffix": "", "affiliation": { "laboratory": "", "institution": "HKBU", "location": { "settlement": "Doha", "country": "Qatar" } }, "email": "kdarwish@hbku.edu.qa" }, { "first": "Walid", "middle": [], "last": "Magdy", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "settlement": "Edinburgh", "country": "UK" } }, "email": "wmagdy@inf.ed.ac.uk" }, { "first": "Hend", "middle": [], "last": "Al-Khalifa", "suffix": "", "affiliation": { "laboratory": "", "institution": "KSU", "location": { "settlement": "Riyadh", "region": "KSA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper provides an overview of the offensive language detection shared task at the 4th workshop on Open-Source Arabic Corpora and Processing Tools (OSACT4). There were two subtasks, namely: Subtask A, involving the detection of offensive language, which contains unacceptable or vulgar content in addition to any kind of explicit or implicit insults or attacks against individuals or groups; and Subtask B, involving the detection of hate speech, which contains insults or threats targeting a group based on their nationality, ethnicity, race, gender, political or sport affiliation, religious belief, or other common characteristics. In total, 40 teams signed up to participate in Subtask A, and 14 of them submitted test runs. For Subtask B, 33 teams signed up to participate and 13 of them submitted runs. We present and analyze all submissions in this paper.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper provides an overview of the offensive language detection shared task at the 4th workshop on Open-Source Arabic Corpora and Processing Tools (OSACT4). There were two subtasks, namely: Subtask A, involving the detection of offensive language, which contains unacceptable or vulgar content in addition to any kind of explicit or implicit insults or attacks against individuals or groups; and Subtask B, involving the detection of hate speech, which contains insults or threats targeting a group based on their nationality, ethnicity, race, gender, political or sport affiliation, religious belief, or other common characteristics. In total, 40 teams signed up to participate in Subtask A, and 14 of them submitted test runs. For Subtask B, 33 teams signed up to participate and 13 of them submitted runs. We present and analyze all submissions in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Offensive speech (vulgar or targeted offense), as an expression of heightened polarization or discord in society, has been on the rise. This is due in part to the large adoption of social media platforms that allow for greater polarization. The OSACT4 shared task provides a platform to bring researchers from around the world to tackle the detection of offensive and hate speech in the realm of Arabic social media. The shared task has two subtasks. 
Subtask A involves detecting offensive language, which contains explicit or implicit insults or attacks against individuals or groups and includes vulgar and inappropriate language. Subtask B is concerned with detecting hate speech, which contains insults or threats targeting specific groups based on their nationality, ethnicity, race, gender, political or sport affiliation, religious belief, or other common characteristics. The goal of the shared task is to aid research on the identification of offensive content and hate speech in Arabic-language Twitter posts. The shared task attracted a large number of participants: in all, 40 and 33 teams signed up for Subtasks A and B, respectively. Of these, 13 teams submitted test runs for both subtasks, and one additional team submitted runs for Subtask A only. Of those teams, 11 submitted system description papers (Abdellatif and Elgammal, 2020; Abu Farha and Magdy, 2020; Abuzayed and Elsayed, 2020; Alharbi and Lee, 2020; Djandji et al., 2020; Elmadany et al., 2020; Haddad et al., 2020; Saeed et al., 2020; Hassan et al., 2020; Husain, 2020; Keleg et al., 2020).", "cite_spans": [ { "start": 1306, "end": 1337, "text": "(Abdellatif and Elgammal, 2020;", "ref_id": null }, { "start": 1338, "end": 1364, "text": "Abu Farha and Magdy, 2020;", "ref_id": null }, { "start": 1365, "end": 1392, "text": "Abuzayed and Elsayed, 2020;", "ref_id": null }, { "start": 1393, "end": 1415, "text": "Alharbi and Lee, 2020;", "ref_id": null }, { "start": 1416, "end": 1437, "text": "Djandji et al., 2020;", "ref_id": null }, { "start": 1438, "end": 1460, "text": "Elmadany et al., 2020;", "ref_id": null }, { "start": 1461, "end": 1481, "text": "Haddad et al., 2020;", "ref_id": null }, { "start": 1482, "end": 1501, "text": "Saeed et al., 2020;", "ref_id": null }, { "start": 1502, "end": 1522, "text": "Hassan et al., 2020;", "ref_id": null }, { "start": 1523, "end": 1536, "text": "Husain, 2020;", "ref_id": null }, { "start": 1537, "end": 1556, "text": "Keleg et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The highest achieved F1 scores for Subtasks A and B were 90.5 (Accuracy = 93.9, Precision = 90.2, and Recall = 90.9) (Hassan et al., 2020) and 95.2 (Accuracy = 95.9, Precision = 95.2, and Recall = 95.9) (Husain, 2020), respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Subtasks A and B used the SemEval 2020 Task 12 Arabic offensive language dataset (OffensEval2020, Subtask A), which contains 10,000 tweets that were manually annotated for offensiveness (labels: OFF or NOT OFF). The subtasks used the same OffensEval2020 training (70% of all tweets), dev (10%), and test (20%) splits. The tweets were extracted from a set of 660k Arabic tweets containing the vocative particle (\"yA\" -O) collected from April 15 to May 6, 2019. Based on different random samples of tweets, offensive tweets represented less than 2% of tweets. However, when considering tweets having one vocative particle, the ratio increased to 5%. This particle is mainly used for directing speech to a person or a group. Moreover, when considering tweets with two vocative particles, the probability of finding offensive tweets increased to 20%. An example offensive statement is (\"yA mqrf yA jbAn h*h tsmY Ksp\" -You disgusting coward. This is called wickedness) 1 . Annotation was performed by a native speaker of Arabic with a good understanding of several Arabic dialects. 
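The vocative-particle heuristic described above (tweets containing "yA" are more likely to be offensive, and tweets with two or more occurrences even more so) can be sketched in a few lines of Python. This is only an illustrative sketch, not the authors' collection pipeline; the helper names and the sample tweets are hypothetical.

```python
# Illustrative sketch (not the authors' pipeline) of the vocative-particle
# heuristic described above: Arabic tweets containing the particle "yA" (يا)
# are more likely to be offensive, and the likelihood grows with two particles.
import re

# "يا" (Buckwalter: yA) as a standalone token.
VOCATIVE = re.compile(r"\bيا\b")

def count_vocatives(text: str) -> int:
    """Count standalone occurrences of the vocative particle in a tweet."""
    return len(VOCATIVE.findall(text))

def select_candidates(tweets, min_count=1):
    """Keep tweets with at least `min_count` vocative particles."""
    return [t for t in tweets if count_vocatives(t) >= min_count]

if __name__ == "__main__":
    sample = [
        "يا صديقي كيف حالك",   # hypothetical tweet: one particle
        "هذا يوم جميل",        # hypothetical tweet: no particle
        "يا مقرف يا جبان",     # pattern from the paper's example: two particles
    ]
    print(select_candidates(sample, min_count=2))
```

Raising min_count from 1 to 2 mirrors the reported increase in the proportion of offensive tweets from about 5% to about 20%.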
Random samples of 100 tweets (50 offensive and 50 non-offensive) were judged by three additional annotators, and the inter-annotator agreement between them was 0.92 (using Fleiss's Kappa coefficient), which validates the quality of the data annotation and indicates that judging the offensiveness of tweets is not difficult in many cases. Offensive tweets containing insults or threats targeting a group based on their nationality, ethnicity, race, gender, political or sport affiliation, religious belief, or other common characteristics were annotated as Hate Speech (labels: HS or NOT HS). An example tweet containing hate speech is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2." }, { "text": "(\"Allh yqlEkm yAlbdw yA mjrmyn\" -May Allah remove you O Bedouin. You are criminals). The distribution of the labels in the dataset is shown in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 143, "end": 150, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": "2." }, { "text": "Counts per label (Train / Dev / Test / Total, ~%): NOT OFF: 5,590 / 821 / 1,598 / 8,009 (80%); OFF: 1,410 / 179 / 402 / 1,991 (20%); NOT HS: 6,639 / 956 / 1,899 / 9,494 (95%); HS: 361 / 44 / 101 / 506 (5%). Table 1: Distribution of labels for Subtasks A and B. Subtask A was concerned with detecting offensive language in general, while Subtask B was concerned with detecting hate speech. Both subtasks used the same train/dev/test splits. For all tweets, some light preprocessing was performed, where user mentions were replaced with @USER, URLs were replaced with URL, and empty lines were replaced with . The Subtask B data is more imbalanced than the Subtask A data, as only 5% of the tweets are labeled as hate speech, while 20% are labeled as offensive.", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 148, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Label", "sec_num": null }, { "text": "Given the strong imbalance between the number of instances in the different classes across Subtasks A and B, we used the macro-averaged F1-score (F) as the official evaluation measure for both subtasks. Macro-averaging gives equal importance to all classes regardless of their size. The secondary evaluation measures that we used were Precision (P) and Recall (R) on the positive class (offensive or hate speech tweets) as well as the overall Accuracy (A). The subtasks were hosted on the CodaLab platform at the following competition links: Subtask A: https://competitions.codalab.org/competitions/22825 and Subtask B: https://competitions.codalab.org/competitions/22826", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Settings and Evaluation", "sec_num": "3." }, { "text": "Participants were allowed to submit up to 10 test runs, and they were asked to specify two submissions as their official runs, which would be scored and put on the leaderboard. If official runs were not specified, the latest submissions from each team were considered official. We gave teams the freedom to describe the differences between their systems in their papers. The idea behind this was to allow teams to examine the effectiveness of different setups on the test set. The macro-averaged F1 (F) of the first submission was the official score for Subtasks A and B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Settings and Evaluation", "sec_num": "3." }, { "text": "We received 43 submissions for Subtask A, including 3 failed ones (e.g., incorrect format). 
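As a rough illustration of the evaluation protocol above (macro-averaged F1 as the official measure, with precision and recall on the positive class and accuracy as secondary measures) and of the light preprocessing applied to the tweets, here is a minimal scikit-learn-based sketch. It is not the official scoring script; the regular expressions, label strings, and example data are assumptions based on the description above.

```python
# Minimal sketch of the preprocessing and scoring described above.
# Assumptions: labels are the strings "OFF"/"NOT OFF" (Subtask A) and the
# gold and predicted labels are already aligned lists; this is not the
# official CodaLab evaluation script.
import re
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def light_preprocess(tweet: str) -> str:
    """Replace user mentions with @USER and hyperlinks with URL."""
    tweet = re.sub(r"@\w+", "@USER", tweet)
    tweet = re.sub(r"https?://\S+", "URL", tweet)
    return tweet

def official_scores(y_true, y_pred, positive_label="OFF"):
    """Macro-averaged F1 (official measure) plus the secondary measures:
    precision and recall on the positive class, and overall accuracy."""
    return {
        "F": f1_score(y_true, y_pred, average="macro"),
        "P": precision_score(y_true, y_pred, pos_label=positive_label),
        "R": recall_score(y_true, y_pred, pos_label=positive_label),
        "A": accuracy_score(y_true, y_pred),
    }

if __name__ == "__main__":
    gold = ["OFF", "NOT OFF", "NOT OFF", "OFF"]   # hypothetical gold labels
    pred = ["OFF", "NOT OFF", "OFF", "OFF"]       # hypothetical system output
    print(official_scores(gold, pred))
```

Macro-averaging computes F1 separately for each class and averages the two values, so the small OFF (or HS) class counts as much as the majority class.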
For Subtask B, we received 41 submissions, including 11 failed ones. The competitions were open from Jan. 21, 2020 until Feb. 19, 2020, and the test sets were available starting on Feb. 13, 2020. Table 2 lists the names of participating teams and their affiliations.", "cite_spans": [], "ref_spans": [ { "start": 281, "end": 288, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Task Settings and Evaluation", "sec_num": "3." }, { "text": "Most teams performed some data preprocessing, which typically involved character normalization and the removal of punctuation, diacritics, repeated letters, and non-Arabic tokens. Per the rules of the shared task, we judged up to two runs for every team (first submission and second submission).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods and Results", "sec_num": "4." }, { "text": "As shown in Tables 4 and 5, the first submission from each team almost always beat its second submission (column F), meaning that the systems that performed best on the dev set generally also performed best on the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods and Results", "sec_num": "4." }, { "text": "This paper presented an overview of the OSACT4 shared task on offensive language and hate speech detection in the Arabic Twitter sphere. The most successful systems performed Arabic-specific preprocessing and used ensembles of different machine learning approaches: the winning system for hate speech detection performed extensive preprocessing, while the winning system for offensive language detection used an ensemble of an SVM trained on character-level n-grams and pretrained embeddings (Mazajak) together with different DNN setups that use FastText, CNN+RNN, and contextual embeddings (multilingual BERT). The preprocessing and methods reported by individual teams were as follows (preprocessing | methods): numbers, usernames, hashtags, and hyperlinks replacement with NUM, USER, HASH, and URL respectively; character normalization; and diacritic removal | fine-tuned multilingual BERT-based affective models. (Haddad et al., 2020): removal of non-Arabic words, diacritics, punctuation, emoticons, stopwords, and repeated characters | convolutional neural network (CNN) and bidirectional recurrent neural network with GRU units (Bi-GRU) models, with and without attention. (Saeed et al., 2020): letter normalization, repeated letter removal, and word splitting | DNN: CNN and RNN using contextual embeddings (multilingual BERT) and non-contextual embeddings (AraVec, FastText, word2vec), with an ensemble classifier that combines all outputs using SVM, RF, NB, etc. (Hassan et al., 2020): diacritic, kashida, repeated letter, and non-Arabic character removal | ensemble of SVM (character n-grams) and pretrained embeddings (Mazajak) and DNN: FastText (subword), CNN+RNN, and contextual embeddings (multilingual BERT). (Husain, 2020): intensive preprocessing (normalizing emoticons, dialectal-to-MSA conversion, word category identification (e.g., animals), letter normalization, stopword removal, and hashtag segmentation) | SVM (character n-grams). (Keleg et al., 2020): word segmentation | LR; DNN: CNN (with AraVec), RNN, and contextual embeddings (multilingual BERT and AraBERT).", "cite_spans": [ { "start": 829, "end": 850, "text": "(Haddad et al., 2020)", "ref_id": null }, { "start": 1088, "end": 1108, "text": "(Saeed et al., 2020)", "ref_id": null }, { "start": 1364, "end": 1398, "text": "RF, NB, etc. 
(Hassan et al., 2020)", "ref_id": null }, { "start": 1626, "end": 1640, "text": "(Husain, 2020)", "ref_id": null }, { "start": 1851, "end": 1871, "text": "(Keleg et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5." }, { "text": "We provide Arabic examples, their Buckwalter transliterations, and English translations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Team / Affiliation / Subtasks: Abeer (Abuzayed and Elsayed, 2020) / Islamic University of Gaza, Palestine / 1,2; aialharbi (Alharbi and Lee, 2020) / University of Birmingham, UK / 1,2; alisafaya / Ko\u00e7 University, Turkey / 1,2; alt (Hassan et al., 2020) / Qatar Computing Research Institute, Qatar / 1,2; AMR-KELEG (Keleg et al., 2020) / Faculty. Alharbi, A. and Lee, M. (2020). Combining character and word embeddings for the detection of offensive language in Arabic. OSACT, 4. Djandji, M., Baly, F., Antoun, W., and Hajj, H. (2020). Multi-task learning using AraBERT for offensive language detection. OSACT, 4. Elmadany, A., Zhang, C., Abdul-Mageed, M., and Hashemi, A. (2020). Leveraging affective bidirectional transformers for offensive language detection. OSACT, 4. Haddad, B., Orabe, Z., Al-Abood, A., and Ghneim, N. (2020). Arabic offensive language detection with attention-based deep neural networks. OSACT, 4. Hassan, S., Samih, Y., Mubarak, H., Abdelali, A., Rashed, A., and Absar Chowdhury, S. (2020). ALT submission for OSACT shared task on offensive language detection. OSACT, 4. Husain, F. (2020). OSACT4 shared task on offensive language detection: Intensive preprocessing based approach. OSACT, 4. Keleg, A., El-Beltagy, S. R., and Khalil, M. (2020). ASU OPTO at OSACT4 - offensive language detection for Arabic text. OSACT, 4. Saeed, H. H., Calders, T., and Kamiran, F. (2020). Ocast4 shared tasks: Ensembled stacked classification for offensive and hate speech in Arabic tweets. OSACT, 4.", "cite_spans": [ { "start": 27, "end": 55, "text": "(Abuzayed and Elsayed, 2020)", "ref_id": null }, { "start": 121, "end": 131, "text": "Lee, 2020)", "ref_id": null }, { "start": 206, "end": 227, "text": "(Hassan et al., 2020)", "ref_id": null }, { "start": 279, "end": 305, "text": "KELEG (Keleg et al., 2020)", "ref_id": null }, { "start": 605, "end": 645, "text": "Abdul-Mageed, M., and Hashemi, A. (2020)", "ref_id": null }, { "start": 763, "end": 798, "text": "Al-Abood, A., and Ghneim, N. (2020)", "ref_id": null }, { "start": 940, "end": 982, "text": "Rashed, A., and Absar Chowdhury, S. (2020)", "ref_id": null }, { "start": 1198, "end": 1238, "text": "El-Beltagy, S. R., and Khalil, M. (2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Team", "sec_num": null } ], "bib_entries": {}, "ref_entries": { "FIGREF0": { "text": "Abdellatif, M. and Elgammal, A. (2020). Sentiment analysis of imbalanced Arabic data using ULMFiT. OSACT, 4. Abu Farha, I. and Magdy, W. (2020). Multitask learning for Arabic offensive language and hate-speech detection. OSACT, 4. Abuzayed, A. and Elsayed, T. (2020). Quick and simple approach for detecting hate speech in Arabic tweets. 
OSACT, 4.", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "diacritic, kashida, repeated letter, and non-Arabic character removal | SVM, Random Forest, XGBoost, Extra Trees, Decision Trees, Gradient Boosting, and LR; DNN: CNN, RNN, and CNN+RNN with two different word representations (tf-idf and pre-trained word embeddings (AraVec)). (Alharbi and Lee, 2020): character normalization and diacritic, kashida, repeated letter, and non-Arabic character removal; also split (\"yA\") | LR and XGBoost; DNN: RNN using Mazajak, AraVec, and subword FastText embeddings. (Djandji et al., 2020): removal of non-Arabic characters, segmentation of words using the Farasa segmenter, and splitting of hashtags | fine tuning of contextual embeddings (AraBERT). (Elmadany et al., 2020)", "type_str": "figure", "uris": null, "num": null }, "TABREF1": { "type_str": "table", "html": null, "text": "", "content": "", "num": null }, "TABREF2": { "type_str": "table", "html": null, "text": "Different methods used by different teams. LR: Logistic Regression; SVM: Support Vector Machines; NB: Naive Bayes; DNN: Deep Neural Network; CNN: Convolutional Neural Network; RNN: Recurrent Neural Network", "content": "
Team | First Submission | Second Submission
 | F A P R | F A P R
(Hassan et al., 2020) | 90.5 93.9 90.2 90.8 | 89.4 93.4 90.5 88.3
(Djandji et al., 2020) | 90.0 93.7 90.7 89.4 | 88.5 93.1 91.9 85.9
(Husain, 2020) | 89.8 90.2 89.9 90.2 | 88.6 89.1 88.6 89.1
(Keleg et al., 2020) | 89.6 93.5 90.5 88.7 | 85.6 90.9 86.2 85.0
(Abu Farha and Magdy, 2020) | 87.8 92.4 88.8 86.8 | 87.8 92.3 88.5 87.1
(Saeed et al., 2020) | 87.4 92.4 90.3 85.1 | 87.8 92.8 91.5 85.1
(Alharbi and Lee, 2020) | 86.8 92.1 89.6 84.7 | 85.7 91.2 87.7 84.1
(Haddad et al., 2020) | 85.9 91.5 88.6 83.8 | 84.6 90.0 84.1 85.2
alisafaya | 84.2 90.8 88.4 81.4 | 81.9 89.5 86.1 79.1
(Abuzayed and Elsayed, 2020) | 83.3 89.7 84.7 82.1 | 82.6 89.9 86.8 79.8
(Elmadany et al., 2020) | 82.9 89.4 84.1 81.8 | 79.3 87.8 82.5 77.1
(Abdellatif and Elgammal, 2020) | 77.4 86.2 78.9 76.2 | 77.4 86.2 78.9 76.2
SAJA | 76.2 86.4 80.5 73.6 |
premjithb | 72.6 81.8 71.9 73.3 | 72.6 81.8 71.9 73.3
", "num": null }, "TABREF3": { "type_str": "table", "html": null, "text": "", "content": "
Subtask A results
", "num": null }, "TABREF4": { "type_str": "table", "html": null, "text": "", "content": "", "num": null } } } }