{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:06:26.634731Z"
},
"title": "OCAST4 Shared Tasks: Ensembled Stacked Classification for Offensive and Hate Speech in Arabic Tweets",
"authors": [
{
"first": "Hafiz",
"middle": [],
"last": "Hassaan Saeed",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Information Technology University",
"location": {
"country": "Pakistan"
}
},
"email": "hassaan.saeed@itu.edu.pk"
},
{
"first": "Toon",
"middle": [],
"last": "Calders",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Antwerp",
"location": {
"country": "Belgium"
}
},
"email": "toon.calders@uantwerpen.be"
},
{
"first": "Faisal",
"middle": [],
"last": "Kamiran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Information Technology University",
"location": {
"country": "Pakistan"
}
},
"email": "faisal.kamiran@itu.edu.pk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we describe our submission for the OCAST4 2020 shared tasks on offensive language and hate speech detection in the Arabic language. Our solution builds upon combining a number of deep learning models using pre-trained word vectors. To improve the word representation and increase word coverage, we compare a number of existing pre-trained word embeddings and finally concatenate the two empirically best among them. To avoid under-as well as over-fitting, we train each deep model multiple times, and we include the optimization of the decision threshold into the training process. The predictions of the resulting models are then combined into a tuned ensemble by stacking a classifier on top of the predictions by these base models. We name our approach \"ESOTP\" (Ensembled Stacking classifier over Optimized Thresholded Predictions of multiple deep models). The resulting ESOTP-based system ranked 6th out of 35 on the shared task of Offensive Language detection (sub-task A) and 5th out of 30 on Hate Speech Detection (sub-task B).",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we describe our submission for the OCAST4 2020 shared tasks on offensive language and hate speech detection in the Arabic language. Our solution builds upon combining a number of deep learning models using pre-trained word vectors. To improve the word representation and increase word coverage, we compare a number of existing pre-trained word embeddings and finally concatenate the two empirically best among them. To avoid under-as well as over-fitting, we train each deep model multiple times, and we include the optimization of the decision threshold into the training process. The predictions of the resulting models are then combined into a tuned ensemble by stacking a classifier on top of the predictions by these base models. We name our approach \"ESOTP\" (Ensembled Stacking classifier over Optimized Thresholded Predictions of multiple deep models). The resulting ESOTP-based system ranked 6th out of 35 on the shared task of Offensive Language detection (sub-task A) and 5th out of 30 on Hate Speech Detection (sub-task B).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Social media platforms have become a widely-used mode of communication among individuals or groups from diverse backgrounds. With the increasing freedom of expression in user-generated content on these platforms, the menace of offensive and hate speech is also on the rise (Santosh and Aravind, 2019), causing adverse effects on users and society at large (Lee and Kim, 2015) . Because of this menace, the identification of offensive or hateful statements towards individuals or groups has become a priority nowadays and many social media companies have already invested millions for building automated systems to detect offensive language and hate speech (Gamb\u00e4ck and Sikdar, 2017) . In this paper, we address the task of offensive language and hate speech detection in the Arabic language by presenting our contributions to two shared tasks (A and B) in OCAST4 2020. The objective in shared subtask A is to identify offensive language whereas the objective in shared subtask B is to identify hate speech in the given tweets. We developed a common methodology for both tasks, and executed the classification pipeline twice, once for each of both subtasks. As a first attempt, we applied classical text classification techniques including Na\u00efve Bayes, Logistic Regression, Support Vector Machines, and Random Forests based on the traditional encoding of the tweets as TF.IDF vectors. Subsequently, more advanced deep learning techniques using pre-trained word embeddings were applied and compared to the classical techniques. Both approaches were compared empirically, showing superior performance for the deep models. The superiority of the deep models motivated further exploration in the direction of deep learning. We compared a number of pre-trained word-level embeddings available for Arabic language processing, and in the end, concatenated the two empirically best performing pre-trained word-level embeddings. Using this combined embedding, several network architec-tures and ways of pre-processing were tried out, and the resulting models were combined in a tuned ensemble as follows. The different deep networks were trained and optimized several times, saving their predictions for both tasks. Finally, a classifier was stacked on top of these predictions to combine them in one ensemble. The resulting classifier was further fine-tuned, therefore, we name our approach \"ESOTP\" which stands for Ensembled Stacking classifier over Optimized Thresholded Predictions of multiple deep models.",
"cite_spans": [
{
"start": 356,
"end": 375,
"text": "(Lee and Kim, 2015)",
"ref_id": "BIBREF6"
},
{
"start": 656,
"end": 682,
"text": "(Gamb\u00e4ck and Sikdar, 2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Hate speech detection has been studied extensively in recent years, especially for highly-resourced languages like English. (Yin et al., 2009) were among the first ones to apply supervised machine learning approaches in hate speech detection. They applied Support Vector Machines to detect harassment in posts from famous social platforms like MySpace. Similarly, (Warner and Hirschberg, 2012) trained a Support Vector Machine classifier on word ngrams and used it to detect hate speech. In recent years, (Waseem and Hovy, 2016) showed that character n-grams are better than word n-grams as predictive features for hate speech detection. Their best performing model was a Gradient Boosted Decision Trees classifier trained on word embeddings learned using LSTMs. There exists, however, very little literature on the problem of Hate Speech detection in Arabic. Some of the few works are discussed next. (Magdy et al., 2015 ) collected a large number of Arabic tweets and trained a Support Vector Machine classifier to predict if a user supports or opposes ISIS. (Mubarak et al., 2017) proposed a methodology for the detection of profane tweets by using an automatically created and expanded list of obscene and offensive words. (Haidar et al., 2017 ) proposed a multilingual system that detects cyberbullying attacks in both English and Arabic texts. They scrapped the data from Facebook and Twitter. The data collected from Facebook was kept for validating the system. Their proposed system was a multilingual cyberbullying detection system and two machine learning models Naive Bayes and Support Vector Machine were used in it. In another related work, (Albadi et al., 2018) prepared the first publicly available Arabic dataset that was especially annotated for religious hate speech detection. They also developed multiple classifiers using lexicon-based, n-gram-based, and deep learning approaches. They found a simple Recurrent Neural Network (RNN) architecture with Gated Recurrent Units (GRU) and pre-trained word embeddings to be the best performing model for the detection of religious hate speech in Arabic. (Mohaouchane et al., 2019) compared multiple deep models including CNN, BLSTM with Attention, BLSTM and Combined CNN-LSTM for detecting offensive language in Arabic. They showed that CNN-LSTM achieved best recall scores whereas CNN achieved highest f1 scores in 5fold cross validation. Recently, (Chowdhury et al., 2019) proposed ARHNET to detect religious hate speech in Arabic by using word embeddings and social network graphs with deep learning models and improved the classification scores than (Albadi et al., 2018) . The overview of OSACT4 Arabic Offensive Language Detection Shared Task is discussed by (Mubarak et al., 2020) .",
"cite_spans": [
{
"start": 124,
"end": 142,
"text": "(Yin et al., 2009)",
"ref_id": "BIBREF16"
},
{
"start": 364,
"end": 393,
"text": "(Warner and Hirschberg, 2012)",
"ref_id": "BIBREF14"
},
{
"start": 505,
"end": 528,
"text": "(Waseem and Hovy, 2016)",
"ref_id": "BIBREF15"
},
{
"start": 902,
"end": 921,
"text": "(Magdy et al., 2015",
"ref_id": "BIBREF7"
},
{
"start": 1061,
"end": 1083,
"text": "(Mubarak et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 1227,
"end": 1247,
"text": "(Haidar et al., 2017",
"ref_id": "BIBREF4"
},
{
"start": 1654,
"end": 1675,
"text": "(Albadi et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 2117,
"end": 2143,
"text": "(Mohaouchane et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 2413,
"end": 2437,
"text": "(Chowdhury et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 2617,
"end": 2638,
"text": "(Albadi et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 2728,
"end": 2750,
"text": "(Mubarak et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Starting from pre-processing, we now discuss the overall methodology (classification pipeline) followed for both subtasks in OCAST4 2020.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3."
},
{
"text": "One pre-processing step was already done over the original tweets by the competition's organizers, i.e., mentions of a specific user were replaced with @USER, URLs were replaced with URL, and empty lines with <LF>. We removed all these replaced tokens along with emoticons, emojis, punctuation marks (both Arabic and English), English characters, digits (both Arabic and English) and Arabic diacritics. We then normalized a few Arabic characters like Hamza, Ya, Ha, and Qaf, and finally removed a repeating character in the string if it is repeated more than 3 times consecutively. An additional pre-processing step is taken for out-of-word-embeddings-vocabulary (OOWEV) words with the models that use pre-trained word embeddings, which is to split an OOWEV word into 2 tokens (i.e., the first character and the rest of the word) if the first character is Wa, Fa, or Sa. The intuition behind this additional step is that Wa, Fa, or Sa appearing at the beginning of an Arabic word function like a grammatical particle as Wa gives added meaning of (\"and\" or \"vow\" or \"oath\"), Fa gives added meaning of (result to a previous statement) and Sa gives added meaning of (in very near future). This way a few more words are covered from the pre-trained word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "3.1."
},
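{
"text": "A minimal Python sketch of the pre-processing steps described above is given next; it is our own illustrative reconstruction rather than the authors' released code, and the normalization map, the character range used to keep Arabic letters, and the helper names (preprocess, cap_repeats) are assumptions.\nNORMALIZE = {'\u0623': '\u0627', '\u0625': '\u0627', '\u0622': '\u0627', '\u0649': '\u064a'}  # assumed subset of normalized characters\nPARTICLES = ('\u0648', '\u0641', '\u0633')  # Wa, Fa, Sa\n\ndef cap_repeats(text, max_run=3):\n    # keep at most max_run consecutive copies of the same character\n    out = []\n    for ch in text:\n        if len(out) >= max_run and all(c == ch for c in out[-max_run:]):\n            continue\n        out.append(ch)\n    return ''.join(out)\n\ndef preprocess(tweet, embedding_vocab):\n    for token in ('@USER', 'URL', '<LF>'):\n        tweet = tweet.replace(token, ' ')\n    # keep Arabic letters and spaces only; drops emojis, digits, punctuation, Latin characters and diacritics\n    tweet = ''.join(ch if '\u0621' <= ch <= '\u064a' or ch == ' ' else ' ' for ch in tweet)\n    tweet = ''.join(NORMALIZE.get(ch, ch) for ch in tweet)\n    tweet = cap_repeats(tweet)\n    tokens = []\n    for word in tweet.split():\n        # split a leading Wa/Fa/Sa particle off an out-of-embedding-vocabulary word\n        if word not in embedding_vocab and len(word) > 1 and word[0] in PARTICLES:\n            tokens.extend([word[0], word[1:]])\n        else:\n            tokens.append(word)\n    return tokens",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "3.1."
},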
{
"text": "A number of pre-trained word vectors are available for Arabic language processing like FastText (Grave et al., 2018) , Word2Vec 1 (Continuous Skip gram trained over Arabic 1 http://vectors.nlpl.eu/repository/20/31. zip CoNLL17 corpus), AraVec (Soliman et al., 2017) , N-Gram and Uni-Gram models, and recent BERT 2 multilingual vectors. We empirically evaluated these available word embeddings based on the given evaluation metric and concatenated the two best among them which were FastText (300 dimensional vectors) and Word2Vec (100 dimensional vectors) resulting in a 400 dimensional vector representation for words in the corpus. The resulting concatenation of word embeddings yields 4 types of words: type 1) words which exist in both embeddings; type 2) words which exist in the first embedding but do not exist in the second; type 3) words which exist in the second embedding but do not exist in the first; type 4) words which neither exist in the first nor in the second embedding.",
"cite_spans": [
{
"start": 96,
"end": 116,
"text": "(Grave et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 243,
"end": 265,
"text": "(Soliman et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained Word Vectors",
"sec_num": "3.2."
},
{
"text": "E 1 E 1 N (\u00b5 1 , \u03c3 1 ) N (\u00b5 1 , \u03c3 1 ) E 2 N (\u00b5 2 , \u03c3 2 ) E 2 N (\u00b5 2 , \u03c3 2 ) W type_1 W type_2 W type_3 W type_4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained Word Vectors",
"sec_num": "3.2."
},
{
"text": "FastText 300 D W2V 100 D Figure 1 : Assigning vectors to the respective types of words yielded from the concatenation of FastText and Word2Vec word embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pre-trained Word Vectors",
"sec_num": "3.2."
},
{
"text": "The strategy adopted for assigning vectors to all four types of words is shown in Figure 1 and is explained as: Let E 1 be vector components from the FastText embedding, E 2 be vector components from the Word2Vec embedding, \u03bc 1 be the mean of all vectors in FastText, \u03bc 2 be the mean of vectors in Word2Vec, \u03c3 1 be the standard deviation of the vectors in FastText, \u03c3 2 be the standard deviation of the vectors in Word2Vec, then the vectors assigned to the types of words are: type 1) get E 1 and E 2 ; type 2) get E 1 and initialize last 100 components with Gaussian distribution using \u03bc 2 & \u03c3 2 ; type 3) get E 2 and initialize first 300 components with Gaussian distribution using \u03bc 1 & \u03c3 1 ; type 4) initialize first 300 components with Gaussian distribution using \u03bc 1 , \u03c3 1 and last 100 components with Gaussian distribution using \u03bc 2 , \u03c3 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 90,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pre-trained Word Vectors",
"sec_num": "3.2."
},
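{
"text": "The following Python sketch illustrates this assignment strategy, assuming the FastText and Word2Vec vectors have already been loaded into dictionaries (ft_vectors and w2v_vectors, mapping word to numpy array); the function name, the per-component computation of the means and standard deviations, and the random seed are our own assumptions.\nimport numpy as np\n\ndef build_embedding_matrix(words, ft_vectors, w2v_vectors, seed=0):\n    rng = np.random.default_rng(seed)\n    ft_stack = np.stack(list(ft_vectors.values()))    # shape (n1, 300)\n    w2v_stack = np.stack(list(w2v_vectors.values()))  # shape (n2, 100)\n    mu1, sigma1 = ft_stack.mean(axis=0), ft_stack.std(axis=0)\n    mu2, sigma2 = w2v_stack.mean(axis=0), w2v_stack.std(axis=0)\n    matrix = np.zeros((len(words), ft_stack.shape[1] + w2v_stack.shape[1]))\n    for i, w in enumerate(words):\n        # first 300 components: FastText vector, or a Gaussian sample with FastText statistics\n        e1 = ft_vectors.get(w, rng.normal(mu1, sigma1))\n        # last 100 components: Word2Vec vector, or a Gaussian sample with Word2Vec statistics\n        e2 = w2v_vectors.get(w, rng.normal(mu2, sigma2))\n        matrix[i] = np.concatenate([e1, e2])\n    return matrix",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-trained Word Vectors",
"sec_num": "3.2."
},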
{
"text": "We used four different types of neural architectures for both tasks of offensive language and hate speech detection, namely: 1) Convolutional Neural Networks (CNN) ; 2) Nets based on Bidirectional Long Short-Term Memory (BLSTM); 3) Nets based on Bidirectional Gated Recurrent Units (BGRU); and 4) Nets based on Bidirectional LSTMs with CNN (BLSTM+CNN). We briefly explain these architectures. (Kim, 2014) . The input layer in this architecture is an embedding layer, attached to a 1D spatial dropout layer that is then reshaped to a 2D matrix of M \u00d7 V , where M is maximum length of tweets in the corpus and V is the size of embedding vectors. After reshaping the input, 5 convolutional layers are attached in parallel having 128 kernels in each layer with kernel dimensions ranging from 1 \u00d7 V , to 5 \u00d7 V . All these parallel layers are then attached to a global max-pooling layer and concatenated to make a single feature vector, connected then to a dropout layer, followed by fully connected layers of 100, 50 and 1 units respectively. The activation function in the last layer is a sigmoid whereas for the rest of the network we use the exponential linear unit (ELU) function.",
"cite_spans": [
{
"start": 393,
"end": 404,
"text": "(Kim, 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 158,
"end": 163,
"text": "(CNN)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Models Used",
"sec_num": "3.3."
},
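{
"text": "A Keras sketch of the CNN described above follows; the layer sizes follow the text, but the dropout rates, optimizer, and the use of a constant initializer for the embedding weights are our own assumptions.\nfrom tensorflow.keras import initializers, layers, models\n\ndef build_cnn(max_len, vocab_size, emb_dim, emb_matrix, dropout=0.3):\n    inp = layers.Input(shape=(max_len,))\n    x = layers.Embedding(vocab_size, emb_dim, embeddings_initializer=initializers.Constant(emb_matrix), trainable=False)(inp)\n    x = layers.SpatialDropout1D(dropout)(x)\n    x = layers.Reshape((max_len, emb_dim, 1))(x)\n    # 5 parallel convolutions with 128 kernels each, spanning the full embedding width, kernel heights 1..5\n    branches = []\n    for k in range(1, 6):\n        c = layers.Conv2D(128, (k, emb_dim), activation='elu')(x)\n        branches.append(layers.GlobalMaxPooling2D()(c))\n    x = layers.Concatenate()(branches)\n    x = layers.Dropout(dropout)(x)\n    x = layers.Dense(100, activation='elu')(x)\n    x = layers.Dense(50, activation='elu')(x)\n    out = layers.Dense(1, activation='sigmoid')(x)\n    model = models.Model(inp, out)\n    model.compile(optimizer='adam', loss='binary_crossentropy')\n    return model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models Used",
"sec_num": "3.3."
},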
{
"text": "This architecture is taken from (Saeed et al., 2018) . The input to this architecture is an embedding layer followed by a 1D spatial dropout layer, which is then attached to two parallel blocks of Bidirectional Long-Short Term Memory (BLSTM) where the first block has 128 units and the second block 64 units. Global max-pooling and global averagepooling layers are attached to both parallel blocks and are concatenated to make one feature vector, which is then attached to fully connected layers of 100, 50, and 1 units respectively. The activation function in the last layer is a sigmoid whereas the BLSTM layers use the tanh activation function and for the rest of the network we use the exponential linear unit (ELU) function.",
"cite_spans": [
{
"start": 32,
"end": 52,
"text": "(Saeed et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BLSTM",
"sec_num": "3.3.2."
},
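{
"text": "A corresponding Keras sketch of the BLSTM architecture is shown below; as before, the dropout rates, optimizer, and embedding initialization are assumptions, and the same embedding setup as in the CNN sketch is reused.\nfrom tensorflow.keras import initializers, layers, models\n\ndef build_blstm(max_len, vocab_size, emb_dim, emb_matrix, dropout=0.3):\n    inp = layers.Input(shape=(max_len,))\n    x = layers.Embedding(vocab_size, emb_dim, embeddings_initializer=initializers.Constant(emb_matrix), trainable=False)(inp)\n    x = layers.SpatialDropout1D(dropout)(x)\n    # two parallel bidirectional LSTM blocks with 128 and 64 units, each followed by max- and average-pooling\n    feats = []\n    for units in (128, 64):\n        h = layers.Bidirectional(layers.LSTM(units, return_sequences=True))(x)\n        feats.append(layers.GlobalMaxPooling1D()(h))\n        feats.append(layers.GlobalAveragePooling1D()(h))\n    x = layers.Concatenate()(feats)\n    x = layers.Dense(100, activation='elu')(x)\n    x = layers.Dense(50, activation='elu')(x)\n    out = layers.Dense(1, activation='sigmoid')(x)\n    model = models.Model(inp, out)\n    model.compile(optimizer='adam', loss='binary_crossentropy')\n    return model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLSTM",
"sec_num": "3.3.2."
},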
{
"text": "This architecture is also taken from (Saeed et al., 2018) and is similar to the BLSTM architecture. The only difference between this architecture and BLSTM is that we use GRU instead of LSTM. The rest of the architecture is same as that of BLSTM.",
"cite_spans": [
{
"start": 37,
"end": 57,
"text": "(Saeed et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BGRU",
"sec_num": "3.3.3."
},
{
"text": "This architecture has an input embedding layer connected to a 1D spatial dropout layer. The output from the 1D spatial dropout layer is given as input to a bidirectional LSTM layer with 128 units and then a 1D convolutional layer is attached with 64 kernels of size 4, connected on its turn with a global max-pooling layer, followed by a dropout layer, and again 3 fully connected layers having 100, 50 and 1 units respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLSTM+CNN",
"sec_num": "3.3.4."
},
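{
"text": "A compact Keras sketch of this combined architecture, under the same assumptions as the previous sketches (dropout rate, optimizer, and embedding initialization are ours):\nfrom tensorflow.keras import initializers, layers, models\n\ndef build_blstm_cnn(max_len, vocab_size, emb_dim, emb_matrix, dropout=0.3):\n    inp = layers.Input(shape=(max_len,))\n    x = layers.Embedding(vocab_size, emb_dim, embeddings_initializer=initializers.Constant(emb_matrix), trainable=False)(inp)\n    x = layers.SpatialDropout1D(dropout)(x)\n    x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)\n    x = layers.Conv1D(64, 4, activation='elu')(x)  # 64 kernels of size 4\n    x = layers.GlobalMaxPooling1D()(x)\n    x = layers.Dropout(dropout)(x)\n    x = layers.Dense(100, activation='elu')(x)\n    x = layers.Dense(50, activation='elu')(x)\n    out = layers.Dense(1, activation='sigmoid')(x)\n    model = models.Model(inp, out)\n    model.compile(optimizer='adam', loss='binary_crossentropy')\n    return model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLSTM+CNN",
"sec_num": "3.3.4."
},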
{
"text": "The overall classification pipeline is shown in Figure 2 . We train all four models: CNN, BLSTM, BGRU, and BLSTM+CNN, for 250, 200, 70 and 30 times respectively. The decision threshold is optimized for F1 as part of the training phase. We hence get 550 predictions for each sample in the validation set. Using these 550 predictions as a new training set, we built a stacking classifier that is an ensemble of a Na\u00efve Bayes classifier, a Logistic Regression model, a Support Vector Machine, a Nearest Neighbours classifier and a Random Forest. We fine-tune this new Ensembled Stacking Classifier as well. We named our approach \"ESOTP\", which stands for Ensembled Stacking classifier over Optimized Thresholded Predictions of multiple deep models.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 56,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ensembled Stacking Classifier",
"sec_num": "3.4."
},
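{
"text": "A scikit-learn sketch of the stacking step is given below, assuming base_preds is an array of shape (n_samples, 550) holding the saved predictions of the repeated deep-model runs; the soft-voting combination and the individual hyper-parameters shown here are our own assumptions, since the tuned values are not reported.\nfrom sklearn.ensemble import RandomForestClassifier, VotingClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\n\ndef build_esotp_stacker():\n    # ensemble of the five classifiers stacked over the 550 base-model predictions\n    return VotingClassifier(\n        estimators=[\n            ('nb', GaussianNB()),\n            ('lr', LogisticRegression(max_iter=1000)),\n            ('svm', SVC(probability=True)),\n            ('knn', KNeighborsClassifier()),\n            ('rf', RandomForestClassifier(n_estimators=200)),\n        ],\n        voting='soft',\n    )\n\n# stacker = build_esotp_stacker().fit(base_preds, labels)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensembled Stacking Classifier",
"sec_num": "3.4."
},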
{
"text": "We used Keras deep learning framework with Tensorflow backend to build our deep classification pipeline. The evaluation metric used to test the classification system is macro averaged f1 score. We report cross-validation scores as our results in this paper, as there was a limit of 10 submissions at maximum per team during the OCAST4 testing phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimentation & Results",
"sec_num": "4."
},
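{
"text": "For the macro-averaged F1 metric and the per-model decision thresholds mentioned above, the following small sketch shows how a threshold can be tuned on validation predictions; the grid of candidate thresholds is an assumption.\nimport numpy as np\nfrom sklearn.metrics import f1_score\n\ndef best_threshold(y_true, y_prob, grid=np.linspace(0.1, 0.9, 81)):\n    # pick the decision threshold that maximizes macro-averaged F1 on validation data\n    scores = [f1_score(y_true, (y_prob >= t).astype(int), average='macro') for t in grid]\n    return float(grid[int(np.argmax(scores))])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimentation & Results",
"sec_num": "4."
},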
{
"text": "We tune hyper-parameters of the deep models used in this study by mixing grid search with manual tuning. The hyper-parameters include batch size, optimizers, learning rate, the number of kernels in CNN, the number of units in recurrent layers, and the dropout rates. The hyperparameters in ensembled stacking classifier include penalty, solver and regularization parameter for Logistic Regression; penalty, kernel function, regularization parameter and gamma for Support Vector Machine; values of K in Nearest Neighbours; and number of estimators, splitting criterion and max. depth of trees in Random Forest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameter Tuning",
"sec_num": "4.1."
},
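{
"text": "As an illustration of the grid search used for the stacking components, the following scikit-learn snippet tunes the Support Vector Machine part; the parameter grid shown is hypothetical, since the actual grids and selected values are not reported.\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.svm import SVC\n\n# illustrative grid for the SVM component of the stacker\nparam_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf'], 'gamma': ['scale', 'auto']}\nsearch = GridSearchCV(SVC(), param_grid, scoring='f1_macro', cv=5)\n# search.fit(base_preds, labels); search.best_params_",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameter Tuning",
"sec_num": "4.1."
},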
{
"text": "We compared pre-trained word embeddings with CNN architecture over 20 runs only due to time limitations. The average of 20 runs for both subtasks is shown in Table 1 , which shows that Word2Vec and FastText achieved the",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 166,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pre-trained Word Vectors",
"sec_num": "4.2."
}
],
"back_matter": [
{
"text": "highest f1 scores on our cross-validation when used as pretrained word vectors, and therefore we selected both these embeddings to concatenate them for the representation of words from both embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "The main results are shown in Table 2 . Na\u00efve Bayes (NB), Logistic Regression (LR), Random Forest (RF) and Support Vector Machines (SVM) give lower F1 scores as compared to deep models in our cross-validation. Besides the deep models described in section 3.3., we trained two additional deep models: 1) BLSTM with Attention; 2) BLSTM with some statistical features like number of punctuation marks, number of characters, number of words, number of rare words, number of out-of-vocabulary words, etc. The cross-validation scores showed deterioration instead of improvement, therefore, we ignored them from being into our ensembled stacking classification. The scores in Table 2 87.37% f1 for subtask A (ranked 6/35) and 79.85% for subtask B (ranked 5/30).",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 2",
"ref_id": null
},
{
"start": 669,
"end": 676,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3."
},
{
"text": "We present our submission to the shared tasks of offensive language and hate speech detection in OCAST4 2020. To develop a good classification pipeline for both tasks, we select the empirically best word representations using available pre-trained word embeddings with some languagespecific pre-processing, and afterwards compare a number of deep learning approaches. Our final submission is based on fine-tuning a stacking classifier where we use an ensemble of multiple models as the stacking classifier, built over different deep models trained for several times. Our classification pipeline (ESTOP) results in better generalization as compared to individual deep models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "We thank Louis Bruyns Foundation, Belgium, for their support in this research study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "6."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Are they our brothers? analysis and detection of religious hate speech in the arabic twittersphere",
"authors": [
{
"first": "N",
"middle": [],
"last": "Albadi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kurdi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Mishra",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)",
"volume": "",
"issue": "",
"pages": "69--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Albadi, N., Kurdi, M., and Mishra, S. (2018). Are they our brothers? analysis and detection of religious hate speech in the arabic twittersphere. In 2018 IEEE/ACM Interna- tional Conference on Advances in Social Networks Anal- ysis and Mining (ASONAM), pages 69-76. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Arhnet -leveraging community interaction for detection of religious hate speech in arabic",
"authors": [
{
"first": "A",
"middle": [
"G"
],
"last": "Chowdhury",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Didolkar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sawhney",
"suffix": ""
},
{
"first": "R",
"middle": [
"R"
],
"last": "Shah",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "2",
"issue": "",
"pages": "273--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chowdhury, A. G., Didolkar, A., Sawhney, R., and Shah, R. R. (2019). Arhnet -leveraging community interaction for detection of religious hate speech in arabic. In Fer- nando Alva-Manchego, et al., editors, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28 -August 2, 2019, Volume 2: Student Research Workshop, pages 273-280. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using convolutional neural networks to classify hate-speech",
"authors": [
{
"first": "B",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "U",
"middle": [
"K"
],
"last": "Sikdar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online, ALW@ACL 2017",
"volume": "",
"issue": "",
"pages": "85--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gamb\u00e4ck, B. and Sikdar, U. K. (2017). Using convolu- tional neural networks to classify hate-speech. In Pro- ceedings of the First Workshop on Abusive Language On- line, ALW@ACL 2017, Vancouver, BC, Canada, August 4, 2017, pages 85-90.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning word vectors for 157 languages",
"authors": [
{
"first": "E",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grave, E., Bojanowski, P., Gupta, P., Joulin, A., and Mikolov, T. (2018). Learning word vectors for 157 lan- guages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multilingual cyberbullying detection system: Detecting cyberbullying in arabic content",
"authors": [
{
"first": "B",
"middle": [],
"last": "Haidar",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chamoun",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Serhrouchni",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 1st Cyber Security in Networking Conference (CSNet)",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haidar, B., Chamoun, M., and Serhrouchni, A. (2017). Multilingual cyberbullying detection system: Detecting cyberbullying in arabic content. In 2017 1st Cyber Se- curity in Networking Conference (CSNet), pages 1-8. IEEE.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, Y. (2014). Convolutional neural networks for sen- tence classification. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Pro- cessing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746-1751.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Why people post benevolent and malicious comments online",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2015,
"venue": "Commun. ACM",
"volume": "58",
"issue": "11",
"pages": "74--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, S. and Kim, H. (2015). Why people post benevo- lent and malicious comments online. Commun. ACM, 58(11):74-79.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "# failedrevolutions: Using twitter to study the antecedents of isis support",
"authors": [
{
"first": "W",
"middle": [],
"last": "Magdy",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Weber",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.02401"
]
},
"num": null,
"urls": [],
"raw_text": "Magdy, W., Darwish, K., and Weber, I. (2015). # faile- drevolutions: Using twitter to study the antecedents of isis support. arXiv preprint arXiv:1503.02401.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Detecting offensive language on arabic social media using deep learning",
"authors": [
{
"first": "H",
"middle": [],
"last": "Mohaouchane",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mourhir",
"suffix": ""
},
{
"first": "N",
"middle": [
"S"
],
"last": "Nikolov",
"suffix": ""
}
],
"year": 2019,
"venue": "Sixth International Conference on Social Networks Analysis",
"volume": "",
"issue": "",
"pages": "466--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohaouchane, H., Mourhir, A., and Nikolov, N. S. (2019). Detecting offensive language on arabic social media us- ing deep learning. In Mohammad A. Alsmirat et al., editors, Sixth International Conference on Social Net- works Analysis, Management and Security, SNAMS 2019, Granada, Spain, October 22-25, 2019, pages 466- 471. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Abusive language detection on arabic social media",
"authors": [
{
"first": "H",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Magdy",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Abusive Language Online",
"volume": "",
"issue": "",
"pages": "52--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mubarak, H., Darwish, K., and Magdy, W. (2017). Abu- sive language detection on arabic social media. In Pro- ceedings of the First Workshop on Abusive Language On- line, pages 52-56.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of osact4 arabic offensive language detection shared task",
"authors": [
{
"first": "H",
"middle": [],
"last": "Mubarak",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Magdy",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Elsayed",
"suffix": ""
},
{
"first": "Al-Khalifa",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT)",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mubarak, H., Darwish, K., Magdy, W., Elsayed, T., and Al-Khalifa, H. (2020). Overview of osact4 arabic offen- sive language detection shared task. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT), volume 4.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Overlapping toxic sentiment classification using deep neural architectures",
"authors": [
{
"first": "H",
"middle": [
"H"
],
"last": "Saeed",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Shahzad",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Kamiran",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE International Conference on Data Mining Workshops, ICDM Workshops",
"volume": "",
"issue": "",
"pages": "1361--1366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saeed, H. H., Shahzad, K., and Kamiran, F. (2018). Over- lapping toxic sentiment classification using deep neural architectures. In 2018 IEEE International Conference on Data Mining Workshops, ICDM Workshops, Singapore, Singapore, November 17-20, 2018, pages 1361-1366. IEEE.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hate speech detection in hindi-english code-mixed social media text",
"authors": [
{
"first": "T",
"middle": [
"Y S S"
],
"last": "Santosh",
"suffix": ""
},
{
"first": "K",
"middle": [
"V S"
],
"last": "Aravind",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the ACM India Joint International Conference on Data Science and Management of Data, COMAD/CODS 2019",
"volume": "",
"issue": "",
"pages": "310--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Santosh, T. Y. S. S. and Aravind, K. V. S. (2019). Hate speech detection in hindi-english code-mixed social me- dia text. In Proceedings of the ACM India Joint Inter- national Conference on Data Science and Management of Data, COMAD/CODS 2019, Kolkata, India, January 3-5, 2019, pages 310-313.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Aravec: A set of arabic word embedding models for use in arabic NLP",
"authors": [
{
"first": "A",
"middle": [
"B"
],
"last": "Soliman",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Eissa",
"suffix": ""
},
{
"first": "S",
"middle": [
"R"
],
"last": "El-Beltagy",
"suffix": ""
}
],
"year": 2017,
"venue": "Third International Conference On Arabic Computational Linguistics",
"volume": "",
"issue": "",
"pages": "256--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soliman, A. B., Eissa, K., and El-Beltagy, S. R. (2017). Aravec: A set of arabic word embedding models for use in arabic NLP. In Third International Conference On Arabic Computational Linguistics, ACLING 2017, November 5-6, 2017, Dubai, United Arab Emirates, pages 256-265.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Detecting hate speech on the world wide web",
"authors": [
{
"first": "W",
"middle": [],
"last": "Warner",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Second Workshop on Language in Social Media",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Warner, W. and Hirschberg, J. (2012). Detecting hate speech on the world wide web. In Proceedings of the Second Workshop on Language in Social Media, pages 19-26, Montr\u00e9al, Canada, June. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Waseem",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the NAACL student research workshop",
"volume": "",
"issue": "",
"pages": "88--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waseem, Z. and Hovy, D. (2016). Hateful symbols or hate- ful people? predictive features for hate speech detec- tion on twitter. In Proceedings of the NAACL student research workshop, pages 88-93.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Detection of harassment on web 2.0",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "B",
"middle": [
"D"
],
"last": "Davison",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kontostathis",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Edwards",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Content Analysis in the WEB",
"volume": "2",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yin, D., Xue, Z., Hong, L., Davison, B. D., Kontostathis, A., and Edwards, L. (2009). Detection of harassment on web 2.0. Proceedings of the Content Analysis in the WEB, 2:1-7.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Classification pipeline followed to detect offensive language and hate speech in Arabic language 3.3.1. CNN This architecture is based on the one presented by"
}
}
}
} |