{
    "paper_id": "2020",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:06:24.738709Z"
    },
    "title": "Multi-Task Learning using AraBert for Offensive Language Detection",
    "authors": [
        {
            "first": "Marc",
            "middle": [],
            "last": "Djandji",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "American University of Beirut",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Fady",
            "middle": [],
            "last": "Baly",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "American University of Beirut",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Wissam",
            "middle": [],
            "last": "Antoun",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "American University of Beirut",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Hazem",
            "middle": [],
            "last": "Hajj",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "American University of Beirut",
                "location": {}
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "The use of social media platforms has become more prevalent, which has provided tremendous opportunities for people to connect but has also opened the door for misuse with the spread of hate speech and offensive language. This phenomenon has been driving more and more people to more extreme reactions and online aggression, sometimes causing physical harm to individuals or groups of people. There is a need to control and prevent such misuse of online social media through automatic detection of profane language. The shared task on Offensive Language Detection at the OSACT4 has aimed at achieving state of art profane language detection methods for Arabic social media. Our team \"BERTologists\" tackled this problem by leveraging state of the art pretrained Arabic language model, AraBERT, that we augment with the addition of Multi-task learning to enable our model to learn efficiently from little data. Our Multitask AraBERT approach achieved the second place in both subtasks A & B, which shows that the model performs consistently across different tasks.",
    "pdf_parse": {
        "paper_id": "2020",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "The use of social media platforms has become more prevalent, which has provided tremendous opportunities for people to connect but has also opened the door for misuse with the spread of hate speech and offensive language. This phenomenon has been driving more and more people to more extreme reactions and online aggression, sometimes causing physical harm to individuals or groups of people. There is a need to control and prevent such misuse of online social media through automatic detection of profane language. The shared task on Offensive Language Detection at the OSACT4 has aimed at achieving state of art profane language detection methods for Arabic social media. Our team \"BERTologists\" tackled this problem by leveraging state of the art pretrained Arabic language model, AraBERT, that we augment with the addition of Multi-task learning to enable our model to learn efficiently from little data. Our Multitask AraBERT approach achieved the second place in both subtasks A & B, which shows that the model performs consistently across different tasks.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Offensive language, including hate speech, is a violent behavior that is becoming more and more pervasive across public social media platforms (Fosler-Lussier et al., 2012) . Hate speech was found to negatively impact the psychological well-being of individuals and to deteriorate inter-group relations on the societal level (Tynes et al., 2008) . As such, detection and prevention mechanisms should be setup to deal with such content. Machine learning algorithms can be employed to automatically detect these behaviors by relying on recent techniques in natural language processing that have shown propitious performance. A small number of works targeted the problem of simultaneously detecting both hate and offensive speech in Arabic. For example, Haddad et al. (2019) targeted the problem of hate and offensive speech detection for the Tunisian dialect using Support Vector Machine (SVM) and Naive Bayes classifiers trained on hand crafted features. Mulki et al. (2019) targeted the detection of profane language for the Levantine dialect using SVM and NB models trained on hand-crafted features. Although these works provided insights into the features that could be used for Arabic hate and offensive speech detection and introduced datasets for these specific dialects, they are limited to these specific dialects and do not target the problem of developing models that can learn efficiently with little data. In the 4th Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT4) (Mubarak et al., 2020) the shared task on offensive language aimed at offensive and hate speech detection in Arabic tweets. The task is split up into two Subtasks: Subtask A) which aimed at detecting whether a tweet is offensive or not and Subtask B) which aimed at detecting whether a tweet is hate-speech or not. The organizers labeled a tweet as offensive if it contained explicit or implicit insults directed towards other people or inappropriate language. While a tweet labeled as hate speech contains targeted insults towards a group based on their nationality, ethnicity, gender, political or sport affiliation. Each subtask is evaluated independently with a macro-F1 score. The dataset had the following issues that also needed to be addressed: (i) The labeled tweets were written in dialectal Arabic which had inconsistent writing style and vocabulary (ii) The class labels were highly imbalanced especially in the hate speech case where only 5% of the data was labeled as hate speech. The models that we experimented with are all based on fine-tuning the Arabic Bidirectional Encoder Representation from Transformer (AraBERT) model (AUBMind-Lab, 2020) with different training classification schemes. To enable the model to learn from little data and not overfit to the dominant class, we train AraBERT in a multitask paradigm. Our contributions can be summarized as follows:",
                "cite_spans": [
                    {
                        "start": 143,
                        "end": 172,
                        "text": "(Fosler-Lussier et al., 2012)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 325,
                        "end": 345,
                        "text": "(Tynes et al., 2008)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 751,
                        "end": 771,
                        "text": "Haddad et al. (2019)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 954,
                        "end": 973,
                        "text": "Mulki et al. (2019)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 1497,
                        "end": 1519,
                        "text": "(Mubarak et al., 2020)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "\u2022 Comprehensive evaluation including the impact of different sampling techniques and weighted loss functions that penalizes wrong predictions on the minority class in an attempt to balance the data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "\u2022 Propose a new model that combines AraBert and multi-task learning to achieve accurate predictions and address data imbalance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "\u2022 Propose a model that provides consistent performance on both hate and offensive speech detection with the presence of different Arabic dialects. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "Hate Speech Detection An extensive overview of the different works on hate speech detection was done by (Al-Hassan and Al-Dossari, 2019), but very few works in the literature target the problem of Arabic hate speech detection. Albadi et al. (2018) introduced the first dataset containing 6.6K Arabic hate-speech tweets targeting religious groups. The authors compared a lexicon-based classifier, SVM classifier trained with character n-gram features, and a Deep Learning approach consisting of a GRU trained on AraVec embeddings (Soliman et al., 2017) . The GRU approach outperformed all other approaches with a 77% F1 score.",
                "cite_spans": [
                    {
                        "start": 227,
                        "end": 247,
                        "text": "Albadi et al. (2018)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 529,
                        "end": 551,
                        "text": "(Soliman et al., 2017)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hate and Offensive Speech Detection",
                "sec_num": "2.2."
            },
            {
                "text": "Offensive Speech Detection For offensive speech detection in Arabic, different approaches can be found in the literature. Alakrot et al. (2018) , introduced a dataset for offensive speech in Arabic collected from 15K YouTube comments. For classifying the different comments, the data was preprocessed by removing stop words and diacritics, correcting misspelled words, then tokenization and stemming was performed in order to extract features that are used by a binary SVM classifier. Mohaouchane et al. 2019 ",
                "cite_spans": [
                    {
                        "start": 122,
                        "end": 143,
                        "text": "Alakrot et al. (2018)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Hate and Offensive Speech Detection",
                "sec_num": "2.2."
            },
            {
                "text": "We based our approaches on the recently released AraBERT model. AraBERT is a Bidirectional representation of a text sequence, pretrained on a large Arabic corpus that achieved state of the art performance on multiple Arabic NLP tasks. Our best model is based on augmenting AraBERT with Multitask Learning, which solves the data imbalance problem by leveraging information from multiple tasks simultaneously. We also compare our best model with other approaches that are used to solve class imbalance issues such as balanced batch sampling and Multilabel classification. ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Proposed Models",
                "sec_num": "3."
            },
            {
                "text": "Multitask Learning is a learning paradigm that endows the developed models with the human-like abilities of transferring the important learned information between related tasks in what is called inductive transfer of knowledge under the assumption that commonalities exist between the learned tasks. Furthermore, the main advantages of MTL are that it reduces the requirements for large amounts of labeled data, improves the performance of a task with fewer data by leveraging the shared information from the related tasks with more data, and enables the model to be robust to missing observations for some tasks (Caruana, 1997; Qiu et al., 2017) . Given that little data is available for both hate and offensive classes, we use an MTL approach to augment the initial AraBERT model such that it can learn both tasks simultaneously, which reduces the overfitting effect induced by the dominant not offensive and not hate examples. Our MTL-Arabert model consists of two components as can be seen in Figure 1 : a part that gets trained by all the tasks' data in order to extract a general feature representation for all the tasks and a task-specific part that gets trained only by the task-specific examples to capture the task-specific characteristics.",
                "cite_spans": [
                    {
                        "start": 613,
                        "end": 628,
                        "text": "(Caruana, 1997;",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 629,
                        "end": 646,
                        "text": "Qiu et al., 2017)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 997,
                        "end": 1005,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Multitask Learning (MTL)",
                "sec_num": "3.1."
            },
            {
                "text": "1. Shared Part: Contains the pretrained AraBert model that gets tuned by the combined loss of both tasks in order to learn a shared set of information between both tasks 2. Task-specific layers: These consist of a task-specific dense layer that are dedicated to extracting the unique information per task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Multitask Learning (MTL)",
                "sec_num": "3.1."
            },
            {
                "text": "Multilabel Classification Multilabel classification is the task of classifying a single instance with multiple labels.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other Approaches",
                "sec_num": "3.2."
            },
            {
                "text": "We considered using this approach for two main reasons. Firstly, the subtasks are very coherent as they both try to solve problems that behaviorally fall under the same general idea, detecting violent behaviors. Secondly, considering that subtask B has very little hate speech labeled data and that all hate speech data is also labeled as offensive, we assumed that a multilabel classifier would help leverage and provide a better understanding of the hate speech instances as they are being trained simultaneously with the offensive instances. We also explored oversampling the Task B instances and made sure that each training batch included samples of hate speech data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other Approaches",
                "sec_num": "3.2."
            },
            {
                "text": "Weighted Cross-Entropy loss Cross-entropy loss is useful in classification tasks, since the loss increases as the predicted probability diverges from the actual label. The Weighted version, penalizes each class differently, according to the given weight. The weighted cross-entropy loss of a class i with weight W i is shown in 1, the weight vector is given in 2",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other Approaches",
                "sec_num": "3.2."
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "L(x i ) = \u2212W i log exp(x i ) j exp(x j )",
                        "eq_num": "(1)"
                    }
                ],
                "section": "Other Approaches",
                "sec_num": "3.2."
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "W i = N o Samples N o Classes \u00d7 Count(i)",
                        "eq_num": "(2)"
                    }
                ],
                "section": "Other Approaches",
                "sec_num": "3.2."
            },
            {
                "text": "Balanced batch sampling We re-sample the dataset in such a way that we under-sample the majority class and over-sample the minority class at the same time. Which reduces information loss due to under-sampling, and minimizes overfitting due to over-sampling, since the over/under-sampling is done to a lesser extent compared to independently implementing over/under-sampling.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Other Approaches",
                "sec_num": "3.2."
            },
            {
                "text": "The dataset for both tasks is the same containing 10K tweets that were annotated for offensiveness with labels (OFF or NOT OFF) and hate speech with labels (HS or NOT HS). The data was split by the competition organizers into 70% training set, 10% development set, and 20% test set. Table 1 shows the data distribution among the different labels and splits. By examining Table 1 , it can be seen that the data is very imbalanced having only 5% of the examples labeled as hate speech and 20% of the examples labeled as offensive in the training dataset, which makes the tasks much harder and calls for methods that can learn efficiently from little data. ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 283,
                        "end": 290,
                        "text": "Table 1",
                        "ref_id": "TABREF2"
                    },
                    {
                        "start": 371,
                        "end": 378,
                        "text": "Table 1",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Data Description",
                "sec_num": "4.1."
            },
            {
                "text": "For preprocessing the data, we tokenized Arabic words with the Farasa Arabic segmenter (Abdelali et al., 2016) so that the input would be compatible with the AraBERT input. For example, \" -Almadrasa\" becomes \" -Al+ madras +T\". We also removed all mentions of the user tokens \"USER\", retweet mentions \"RT USER:\", URL tokens, the \"<LF>\" tokens, diacritics, and emojis. As for hashtags, we replaced the underscore within a hashtag \" \" with a white space to regain separate understandable tokens, and we pad the hashtag with a whitespace as well. For instance, \" \" turns into \" #\".",
                "cite_spans": [
                    {
                        "start": 87,
                        "end": 110,
                        "text": "(Abdelali et al., 2016)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preprocessing",
                "sec_num": "4.2."
            },
            {
                "text": "We should also mention that these preprocessing steps are precisely applied to all the experiments conducted for both subtasks.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preprocessing",
                "sec_num": "4.2."
            },
            {
                "text": "Both tasks were evaluated using the unweighted-average F1 of all classes, which is the macro-F1 score. Given the high imbalance in the dataset and that the macro-F1 score is penalized by the minority class, achieving a high macro-F1 score is challenging. Table 2 and 3 provide the results of our models on the development and test set, respectively. All three models were trained on the whole training set for five epochs with a batch size of 32 and a sequence length of 256 in a GPU-accelerated environment. The epoch-model that achieved the highest macro-F1 score on the dev-set is reported in Table 2 . We only show the results of our best MTL model on the test data in Table 3 as provided by the competition organizers. Our Multitask approach shows consistent performance on both the dev and test sets across both tasks. The results show that training both tasks jointly in a Multitask setting improves the model generalizability with the presence of little data for each task. The results for the hate speech task are not as good as the offensive language task due to the minimal number of hate speech training examples, which constitute 5% of the training data. Although when combined, balanced batch sampling and weighted loss achieved the second best results on task A. When used separately, both approaches performed worse than the baseline model. This might be due to the overfitting effect of oversampling the minority class. While examining the false predictions of our MTL model on the dev set, we noticed that the model was classifying tweets with a negative sentiment as offensive tweets. While it is intuitive for offensive tweets to have a negative sentiment by nature, our model did not capture the fact that not all tweets with negative sentiment are offensive. On another note, the use of words that are offensive in a nonoffensive context was found to confuse the model. For example, the words \" \" and \" \" in the following tweets (720, 828), respectively, were not used with an offensive intent and made the model classify both tweets as offensive. We also found that the model has learned that a tweet cannot be hate-speech unless it is offensive, which would be ideal in case the offensive prediction was perfect. However, in our case, this also made the model falsely predict three tweets as hate-speech after they were falsely predicted as offensive. Furthermore, tweets 785, 881 in the dev set were found to be mislabeled as hate speech, and the model was able to detect this error showing a good understanding of what characterizes hate speech in a tweet. Finally, we found our model to falsely predict tweets that mostly contain mockery, sarcasm, or quoting other offensive/hateful statements. Future work should explore the use of data augmentation techniques such as adversarial examples and learning from little data approaches such as meta-learning in order to enable state-of-the-art Natural Language Understanding (NLU) models such as AraBERT to be trained efficiently with little data.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 255,
                        "end": 262,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 596,
                        "end": 603,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 673,
                        "end": 680,
                        "text": "Table 3",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.3."
            },
            {
                "text": "The presence of hate speech and offensive language on Arabic social platforms is a major issue affecting the social lives of many individuals in the Arab world. The lack of annotated data and the presence of different dialects constitutes major challenges for automated Arabic offensive and hate speech detection systems. In this paper, we proposed the use of pre-trained Arabic BERT for accurate classification of the different tweets. We further augment the AraBERT model using Multitask Learning to enable the model to jointly learn both tasks efficiently with the presence of little labeled data per-task. Our results show the superiority of our proposed Multitask AraBERT model over single-task and Multilabel AraBERT. We explore different methods in order to cope with the presence of imbalanced training classes such as the use a weighted loss function and data re-sampling techniques, but found these methods to not introduce any improvements. Our method achieved the second place on both tasks in the OSACT4 competition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "5."
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Farasa: A fast and furious segmenter for arabic",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Abdelali",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Darwish",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Durrani",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Mubarak",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations",
                "volume": "",
                "issue": "",
                "pages": "11--16",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Abdelali, A., Darwish, K., Durrani, N., and Mubarak, H. (2016). Farasa: A fast and furious segmenter for ara- bic. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 11-16.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Detection of hate speech in social networks: a survey on multilingual corpus",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Al-Hassan",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Al-Dossari",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "6th International Conference on Computer Science and Information Technology",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Al-Hassan, A. and Al-Dossari, H. (2019). Detection of hate speech in social networks: a survey on multilin- gual corpus. In 6th International Conference on Com- puter Science and Information Technology.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Towards accurate detection of offensive language in online communication in arabic",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Alakrot",
                        "suffix": ""
                    },
                    {
                        "first": "L",
                        "middle": [],
                        "last": "Murray",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [
                            "S"
                        ],
                        "last": "Nikolov",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Procedia computer science",
                "volume": "142",
                "issue": "",
                "pages": "315--320",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alakrot, A., Murray, L., and Nikolov, N. S. (2018). To- wards accurate detection of offensive language in online communication in arabic. Procedia computer science, 142:315-320.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Are they our brothers? analysis and detection of religious hate speech in the arabic twittersphere",
                "authors": [
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Albadi",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Kurdi",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Mishra",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)",
                "volume": "",
                "issue": "",
                "pages": "69--76",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Albadi, N., Kurdi, M., and Mishra, S. (2018). Are they our brothers? analysis and detection of religious hate speech in the arabic twittersphere. In 2018 IEEE/ACM Interna- tional Conference on Advances in Social Networks Anal- ysis and Mining (ASONAM), pages 69-76. IEEE. AUBMind-Lab. (2020). https://github.com/aub- mind/arabert.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Multilingual detection of hate speech against immigrants and women in twitter",
                "authors": [],
                "year": null,
                "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
                "volume": "",
                "issue": "",
                "pages": "54--63",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in twitter. In Pro- ceedings of the 13th International Workshop on Semantic Evaluation, pages 54-63.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Multitask learning. Machine learning",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Caruana",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "",
                "volume": "28",
                "issue": "",
                "pages": "41--75",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Caruana, R. (1997). Multitask learning. Machine learn- ing, 28(1):41-75.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Universal sentence encoder",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Cer",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Yang",
                        "suffix": ""
                    },
                    {
                        "first": "S.-Y",
                        "middle": [],
                        "last": "Kong",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Hua",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Limtiaco",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "S"
                        ],
                        "last": "John",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Constant",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Guajardo-Cespedes",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Yuan",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Tar",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1803.11175"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Cer, D., Yang, Y., Kong, S.-y., Hua, N., Limtiaco, N., John, R. S., Constant, N., Guajardo-Cespedes, M., Yuan, S., Tar, C., et al. (2018). Universal sentence encoder. arXiv preprint arXiv:1803.11175.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Automated hate speech detection and the problem of offensive language",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Davidson",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Warmsley",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Macy",
                        "suffix": ""
                    },
                    {
                        "first": "I",
                        "middle": [],
                        "last": "Weber",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Eleventh international aaai conference on web and social media",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Davidson, T., Warmsley, D., Macy, M., and Weber, I. (2017). Automated hate speech detection and the prob- lem of offensive language. In Eleventh international aaai conference on web and social media.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Devlin",
                        "suffix": ""
                    },
                    {
                        "first": "M.-W",
                        "middle": [],
                        "last": "Chang",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Toutanova",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1810.04805"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional trans- formers for language understanding. arXiv preprint arXiv:1810.04805.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Proceedings of the 2012 conference of the north american chapter of the association for computational linguistics: Human language technologies",
                "authors": [
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Fosler-Lussier",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Riloff",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Bangalore",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fosler-Lussier, E., Riloff, E., and Bangalore, S. (2012). Proceedings of the 2012 conference of the north ameri- can chapter of the association for computational linguis- tics: Human language technologies. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "T-hsab: A tunisian hate speech and abusive dataset",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Haddad",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Mulki",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Oueslati",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "International Conference on Arabic Language Processing",
                "volume": "",
                "issue": "",
                "pages": "251--263",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Haddad, H., Mulki, H., and Oueslati, A. (2019). T-hsab: A tunisian hate speech and abusive dataset. In Inter- national Conference on Arabic Language Processing, pages 251-263. Springer.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Overview of the hasoc track at fire 2019: Hate speech and offensive content identification in indo-european languages",
                "authors": [
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Mandl",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Modha",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Majumder",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Patel",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Dave",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Mandlia",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Patel",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 11th Forum for Information Retrieval Evaluation",
                "volume": "",
                "issue": "",
                "pages": "14--17",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mandl, T., Modha, S., Majumder, P., Patel, D., Dave, M., Mandlia, C., and Patel, A. (2019). Overview of the hasoc track at fire 2019: Hate speech and offensive con- tent identification in indo-european languages. In Pro- ceedings of the 11th Forum for Information Retrieval Evaluation, pages 14-17.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Detecting offensive language on arabic social media using deep learning",
                "authors": [],
                "year": 2019,
                "venue": "Sixth International Conference on Social Networks Analysis, Management and Security (SNAMS)",
                "volume": "",
                "issue": "",
                "pages": "466--471",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Detecting offensive language on arabic social media us- ing deep learning. In 2019 Sixth International Confer- ence on Social Networks Analysis, Management and Se- curity (SNAMS), pages 466-471. IEEE.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Arabic offensive language classification on twitter",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Mubarak",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Darwish",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "International Conference on Social Informatics",
                "volume": "",
                "issue": "",
                "pages": "269--276",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mubarak, H. and Darwish, K. (2019). Arabic offensive language classification on twitter. In International Con- ference on Social Informatics, pages 269-276. Springer.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Overview of osact4 arabic offensive language detection shared task",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Mubarak",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Darwish",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Magdy",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Elsayed",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Al-Khalifa",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "",
                "volume": "4",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mubarak, H., Darwish, K., Magdy, W., Elsayed, T., and Al- Khalifa, H. (2020). Overview of osact4 arabic offensive language detection shared task. 4.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "L-hsab: A levantine twitter dataset for hate speech and abusive language",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Mulki",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Haddad",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "B"
                        ],
                        "last": "Ali",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Alshabani",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the Third Workshop on Abusive Language Online",
                "volume": "",
                "issue": "",
                "pages": "111--118",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mulki, H., Haddad, H., Ali, C. B., and Alshabani, H. (2019). L-hsab: A levantine twitter dataset for hate speech and abusive language. In Proceedings of the Third Workshop on Abusive Language Online, pages 111-118.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "A short-term rainfall prediction model using multi-task convolutional neural networks",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Qiu",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Zhao",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Huang",
                        "suffix": ""
                    },
                    {
                        "first": "X",
                        "middle": [],
                        "last": "Shi",
                        "suffix": ""
                    },
                    {
                        "first": "X",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Chu",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "2017 IEEE International Conference on Data Mining (ICDM)",
                "volume": "",
                "issue": "",
                "pages": "395--404",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Qiu, M., Zhao, P., Zhang, K., Huang, J., Shi, X., Wang, X., and Chu, W. (2017). A short-term rainfall prediction model using multi-task convolutional neural networks. In 2017 IEEE International Conference on Data Mining (ICDM), pages 395-404. IEEE.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "A survey on hate speech detection using natural language processing",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Schmidt",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Wiegand",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media",
                "volume": "",
                "issue": "",
                "pages": "1--10",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Schmidt, A. and Wiegand, M. (2017). A survey on hate speech detection using natural language processing. In Proceedings of the Fifth International Workshop on Nat- ural Language Processing for Social Media, pages 1-10.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Aravec: A set of arabic word embedding models for use in arabic nlp",
                "authors": [
                    {
                        "first": "A",
                        "middle": [
                            "B"
                        ],
                        "last": "Soliman",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Eissa",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "R"
                        ],
                        "last": "El-Beltagy",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Procedia Computer Science",
                "volume": "117",
                "issue": "",
                "pages": "256--265",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Soliman, A. B., Eissa, K., and El-Beltagy, S. R. (2017). Aravec: A set of arabic word embedding models for use in arabic nlp. Procedia Computer Science, 117:256-265.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Online racial discrimination and psychological adjustment among adolescents",
                "authors": [
                    {
                        "first": "B",
                        "middle": [
                            "M"
                        ],
                        "last": "Tynes",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "T"
                        ],
                        "last": "Giang",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [
                            "R"
                        ],
                        "last": "Williams",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [
                            "N"
                        ],
                        "last": "Thompson",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Journal of adolescent health",
                "volume": "43",
                "issue": "6",
                "pages": "565--569",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tynes, B. M., Giang, M. T., Williams, D. R., and Thomp- son, G. N. (2008). Online racial discrimination and psy- chological adjustment among adolescents. Journal of adolescent health, 43(6):565-569.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Hateful symbols or hateful people? predictive features for hate speech detection on twitter",
                "authors": [
                    {
                        "first": "Z",
                        "middle": [],
                        "last": "Waseem",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Hovy",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the NAACL student research workshop",
                "volume": "",
                "issue": "",
                "pages": "88--93",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Waseem, Z. and Hovy, D. (2016). Hateful symbols or hate- ful people? predictive features for hate speech detec- tion on twitter. In Proceedings of the NAACL student research workshop, pages 88-93.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Semeval-2019 task 6: Identifying and categorizing offensive language in social media (offenseval)",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Zampieri",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Malmasi",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Nakov",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Rosenthal",
                        "suffix": ""
                    },
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Farra",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Kumar",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1903.08983"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., and Kumar, R. (2019). Semeval-2019 task 6: Identi- fying and categorizing offensive language in social me- dia (offenseval). arXiv preprint arXiv:1903.08983.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "The trained Multitask Learning model given an input offensive tweet"
            },
            "TABREF1": {
                "num": null,
                "type_str": "table",
                "html": null,
                "text": "The most popular classifiers in the literature are SVM, NB, LSTM, CNN, GRU. The best performing systems employed a CNN model and AraVec embeddings for offensive speech detection and a GRU model on AraVec embeddings for hatespeech detection. Very little work can be found in the literature for Arabic hate and offensive speech detection. The current work does not address the multiple dialects and little data challenges for these tasks.",
                "content": "<table><tr><td>, ex-</td></tr><tr><td>plored the use of different Deep Learning architectures for</td></tr><tr><td>offensive language detection. AraVec embeddings of each</td></tr><tr><td>comment were used to train several models: CNN-LSTM,</td></tr><tr><td>CNN-BiLSTM with attention, Bi-LSTM, and CNN model</td></tr><tr><td>on the dataset proposed in (Alakrot et al., 2018) where the</td></tr><tr><td>CNN model was found to provide the best F1 score. In</td></tr><tr><td>Mubarak and Darwish (2019) 36 million tweets were col-</td></tr><tr><td>lected and used it to train a FastText deep learning model</td></tr><tr><td>and SVM classifier on character n-gram features where it</td></tr><tr><td>was found that the Arabic FastText DL model provided the</td></tr><tr><td>best results.</td></tr></table>"
            },
            "TABREF2": {
                "num": null,
                "type_str": "table",
                "html": null,
                "text": "The data distribution for both tasks. The first two rows show the class distribution of task A. The second two rows show the class distribution of task B",
                "content": "<table><tr><td>Class</td><td colspan=\"2\">Training Developement</td></tr><tr><td>NOT OFF</td><td>5468</td><td>821</td></tr><tr><td>OFF</td><td>1371</td><td>179</td></tr><tr><td>NOT HS</td><td>6489</td><td>956</td></tr><tr><td>HS</td><td>350</td><td>44</td></tr></table>"
            },
            "TABREF3": {
                "num": null,
                "type_str": "table",
                "html": null,
                "text": "The performance of the different approaches on the development set for both tasks using the Macro-F1 score metric. It can be seen that the Multitask approach outperforms all other approaches",
                "content": "<table><tr><td>Model</td><td>Macro-F1 Offensive Language</td><td>Hate Speech</td></tr><tr><td>AraBERT</td><td>89.56</td><td>80.60</td></tr><tr><td>AraBERT-S*</td><td>87.24</td><td>79.42</td></tr><tr><td>AraBERT-W**</td><td>88.17</td><td>79.85</td></tr><tr><td>AraBERT-SW***</td><td>90.02</td><td>78.13</td></tr><tr><td>Multilable AraBERT</td><td>89.41</td><td>79.83</td></tr><tr><td>Multilable AraBERT*</td><td>89.55</td><td>80.81</td></tr><tr><td>Multitask AraBERT</td><td>90.15</td><td>83.41</td></tr><tr><td colspan=\"2\">* AraBERT with balanced batch sampling</td><td/></tr><tr><td colspan=\"2\">** AraBERT with weighted loss</td><td/></tr><tr><td colspan=\"3\">*** AraBERT with both balanced batch sampling and weighted loss</td></tr></table>"
            },
            "TABREF4": {
                "num": null,
                "type_str": "table",
                "html": null,
                "text": "The performance of the Multitask Learning (MTL) model on the test set for both tasks using the Macro-F1 score metric.",
                "content": "<table><tr><td>Model</td><td colspan=\"2\">Task A: Macro-F1 Task B: Macro-F1</td></tr><tr><td>Multitask AraBERT</td><td>90</td><td>82.28</td></tr></table>"
            }
        }
    }
}