{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:06:18.737580Z" }, "title": "From Arabic Sentiment Analysis to Sarcasm Detection: The ArSarcasm Dataset", "authors": [ { "first": "Ibrahim", "middle": [], "last": "Abu-Farha", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Edinburgh Edinburgh", "location": { "country": "United Kingdom" } }, "email": "i.abufarha@ed.ac.uk" }, { "first": "Walid", "middle": [], "last": "Magdy", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Edinburgh Edinburgh", "location": { "country": "United Kingdom" } }, "email": "wmagdy@inf.ed.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Sarcasm is one of the main challenges for sentiment analysis systems. Its complexity comes from the expression of opinion using implicit indirect phrasing. In this paper, we present ArSarcasm, an Arabic sarcasm detection dataset, which was created through the reannotation of available Arabic sentiment analysis datasets. The dataset contains 10,547 tweets, 16% of which are sarcastic. In addition to sarcasm the data was annotated for sentiment and dialects. Our analysis shows the highly subjective nature of these tasks, which is demonstrated by the shift in sentiment labels based on annotators' biases. Experiments show the degradation of state-of-the-art sentiment analysers when faced with sarcastic content. Finally, we train a deep learning model for sarcasm detection using BiLSTM. The model achieves an F1-score of 0.46, which shows the challenging nature of the task, and should act as a basic baseline for future research on our dataset.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Sarcasm is one of the main challenges for sentiment analysis systems. Its complexity comes from the expression of opinion using implicit indirect phrasing. 
In this paper, we present ArSarcasm, an Arabic sarcasm detection dataset, which was created through the reannotation of available Arabic sentiment analysis datasets. The dataset contains 10,547 tweets, 16% of which are sarcastic. In addition to sarcasm, the data was annotated for sentiment and dialects. Our analysis shows the highly subjective nature of these tasks, which is demonstrated by the shift in sentiment labels based on annotators' biases. Experiments show the degradation of state-of-the-art sentiment analysers when faced with sarcastic content. Finally, we train a deep learning model for sarcasm detection using BiLSTM. The model achieves an F1-score of 0.46, which shows the challenging nature of the task, and should act as a basic baseline for future research on our dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Work on subjective language analysis has been prominent in the literature during the last two decades. A major theme that has dominated the area is sentiment analysis (SA). According to (Liu, 2012), SA is the process of extracting and analysing the emotional polarity in a given piece of text. A large amount of work has focused on classifying text into sentiment classes, whose number varies based on the granularity. SA is one of the research areas within the larger natural language processing (NLP) field. The interest in SA research was sparked by the advent of user-driven platforms such as social media websites. Research on SA started with the early work of (Pang et al., 2002), where the authors analysed the sentiment in movie reviews. Since then, the work has developed and spanned different topics and fields such as social media analysis, computational social science and others. Most of the work is focused on English, whereas Arabic did not receive much attention until after 2010. 
The work on Arabic SA was kicked off by (Abdul-Mageed et al., 2011), but it still lags behind the progress in English. This can be attributed to the many challenges of the Arabic language, including the wide variety of dialects (Habash, 2010; Darwish et al., 2014) and its complex morphology (Abdul-Mageed et al., 2011). As the work on SA systems developed, researchers started analysing the intricacies of such systems in order to understand their performance and where they fail. There are many challenges when doing SA, such as negation handling, domain dependence, lack of world knowledge and sarcasm (Hussein, 2018). Sarcasm can be defined as a form of verbal irony that is intended to express contempt or ridicule (Joshi et al., 2017). Sarcasm is correlated with expressing an opinion in an indirect way, where the intended meaning is different from the literal one (Wilson, 2006). Additionally, sarcasm is highly context-dependent, as it always takes place between parties who share common knowledge. Usually, a speaker will not use sarcasm unless he/she thinks that it will be understood as such (Joshi et al., 2017). Sarcasm detection is a crucial task for SA. The reason for this is that a sarcastic utterance usually carries a negative implicit sentiment, while it is expressed using positive expressions. This contradiction between the surface sentiment and the intended one creates a complex challenge for SA systems (Bouazizi and Ohtsuki, 2016). There has been a lot of work on English sarcasm detection, including datasets such as the works of (Abercrombie and Hovy, 2016; Barbieri et al., 2014a; Barbieri et al., 2014b; Filatova, 2012; Ghosh et al., 2015; Joshi et al., 2016) and detection systems such as (Rajadesingan et al., 2015; Joshi et al., 2015; Amir et al., 2016). Work on Arabic sarcasm is yet to follow. 
To the best of our knowledge, work on Arabic sarcasm is limited to the work of (Karoui et al., 2017), a shared task on irony detection (Ghanem et al., 2019) along with the participants' submissions, and a dialectal sarcasm dataset by (Abbes et al., 2020). Currently, there is no publicly available dataset for Arabic sarcasm detection. The data in (Karoui et al., 2017) is not publicly available, and most of the tweets provided in (Ghanem et al., 2019) were deleted. In this paper, we present ArSarcasm, a new Arabic sarcasm detection dataset. The dataset was created using previously available Arabic SA datasets and adds sarcasm and dialect labels to them. The dataset contains 10,547 tweets, 1,682 (16%) of which are sarcastic. In addition, we analyse annotators' subjectivity regarding sentiment annotation, hoping to promote finding better procedures for collecting and annotating new datasets. The analysis shows that annotators' biases can be reflected in the annotation. Moreover, we provide an analysis of the performance of SA systems on sarcastic content. Finally, our BiLSTM-based model, which serves as a baseline for this dataset, achieves an F1-score of 0.46 on the sarcastic class, which indicates that sarcasm detection is a challenging task. 
ArSarcasm is publicly available for research purposes, and it can be downloaded for free 1 .", "cite_spans": [ { "start": 195, "end": 206, "text": "(Liu, 2012)", "ref_id": "BIBREF31" }, { "start": 669, "end": 688, "text": "(Pang et al., 2002)", "ref_id": "BIBREF37" }, { "start": 1034, "end": 1061, "text": "(Abdul-Mageed et al., 2011)", "ref_id": "BIBREF2" }, { "start": 1221, "end": 1235, "text": "(Habash, 2010;", "ref_id": "BIBREF21" }, { "start": 1236, "end": 1257, "text": "Darwish et al., 2014)", "ref_id": "BIBREF12" }, { "start": 1301, "end": 1328, "text": "(Abdul-Mageed et al., 2011)", "ref_id": "BIBREF2" }, { "start": 1612, "end": 1627, "text": "(Hussein, 2018)", "ref_id": "BIBREF22" }, { "start": 1728, "end": 1748, "text": "(Joshi et al., 2017)", "ref_id": "BIBREF26" }, { "start": 1882, "end": 1896, "text": "(Wilson, 2006)", "ref_id": "BIBREF44" }, { "start": 2115, "end": 2135, "text": "(Joshi et al., 2017)", "ref_id": "BIBREF26" }, { "start": 2442, "end": 2470, "text": "(Bouazizi and Ohtsuki, 2016)", "ref_id": "BIBREF11" }, { "start": 2575, "end": 2603, "text": "(Abercrombie and Hovy, 2016;", "ref_id": "BIBREF3" }, { "start": 2604, "end": 2627, "text": "Barbieri et al., 2014a;", "ref_id": "BIBREF9" }, { "start": 2628, "end": 2651, "text": "Barbieri et al., 2014b;", "ref_id": "BIBREF10" }, { "start": 2652, "end": 2667, "text": "Filatova, 2012;", "ref_id": "BIBREF16" }, { "start": 2668, "end": 2687, "text": "Ghosh et al., 2015;", "ref_id": "BIBREF18" }, { "start": 2688, "end": 2707, "text": "Joshi et al., 2016)", "ref_id": "BIBREF25" }, { "start": 2738, "end": 2765, "text": "(Rajadesingan et al., 2015;", "ref_id": "BIBREF39" }, { "start": 2766, "end": 2785, "text": "Joshi et al., 2015;", "ref_id": "BIBREF24" }, { "start": 2786, "end": 2804, "text": "Amir et al., 2016)", "ref_id": "BIBREF6" }, { "start": 2918, "end": 2939, "text": "(Karoui et al., 2017)", "ref_id": "BIBREF28" }, { "start": 2975, "end": 2996, "text": "(Ghanem et al., 2019)", "ref_id": "BIBREF17" 
}, { "start": 3073, "end": 3093, "text": "(Abbes et al., 2020)", "ref_id": "BIBREF1" }, { "start": 3188, "end": 3209, "text": "(Karoui et al., 2017)", "ref_id": "BIBREF28" }, { "start": 3271, "end": 3292, "text": "(Ghanem et al., 2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The literature has a large amount of work on sarcasm and irony detection, which vary from collecting datasets to building detection systems. However, researchers and linguists cannot yet agree on a specific definition of what is considered to be sarcasm. According to (Grice et al., 1975) sarcasm is a form of figurative language where the literal meaning of words is not intended, and the opposite interpretation of the utterance is the intended one. Gibbs Jr et al. (1994) define sarcasm as a bitter and caustic from of irony. According to Merriam Webster's dictionary 2 , sarcasm is \"a sharp and often satirical or ironic utterance designed to cut or give pain\", while irony is defined as \" the use of words to express something other than and especially the opposite of the literal meaning\". These definitions are quite close to each other, yet each of them gives a different definition of sarcasm. While most of the literature assumes that sarcasm is a form of irony, Justo et al. (2014) argues that it is not necessarily ironic. Thus, sarcasm is always confused with other forms of figurative language such as metaphor, irony, humour and satire. One of the early works on English sarcasm/irony detection is the work of (Davidov et al., 2010) , where the authors created a dataset from Twitter using specific hashtags such as #sarcasm and #not, which indicate sarcasm. This way of data collection is called distant supervision, where data is collected based on some specific content that it bears. Distant supervision is the most common approach to collect sarcasm data from Twitter, where the hashtag #sarcasm and others are used. 
Some other works that utilised distant supervision to create Twitter datasets include (Barbieri et al., 2014a; Bamman and Smith, 2015; Bouazizi and Ohtsuki, 2016; Pt\u00e1\u010dek et al., 2014). Davidov et al. (2010) mention that the use of the #sarcasm hashtag is possible but not reliable, and they used it only as a search anchor. Nevertheless, such hashtags can be useful in cases of subtle sarcasm which might not be easily understood otherwise. Khodak et al. (2018) proposed a dataset collected from Reddit. They used a similar distant supervision approach, but relied on the \"/s\" marker, which indicates sarcasm. The other way to create a dataset is through manual labelling. This is done by collecting a large amount of data and asking annotators to manually label it. Works that relied on this approach include (Riloff et al., 2013; Van Hee et al., 2018). According to (Oprea and Magdy, 2019a), this approach of creating datasets captures only the sarcasm that the annotators could perceive and misses the intended sarcasm. Intended sarcasm is when the text is considered sarcastic by its author. In their work, they experimented with the benefits of context in detecting perceived and intended sarcasm. In another work (Oprea and Magdy, 2019b), the authors propose a new dataset that captures intended sarcasm. They collected their data using an online survey, where they asked the participants to provide sarcastic and non-sarcastic tweets. They also asked them to provide an explanation of the sarcastic text and of how they would convey the same idea in a direct way. The work on Arabic sarcasm is scarce and limited to a few attempts. It is also worth mentioning that research on Arabic has inherited the aforementioned confusion about the definition of sarcasm. The earliest work on Arabic sarcasm/irony is (Karoui et al., 2017), where the authors created a corpus of Arabic tweets, which they collected using a set of political keywords. 
They filtered sarcastic content using distant supervision, relying on Arabic equivalents of the #sarcasm hashtag. The result was a set of 5,479 tweets distributed as follows: 1,733 ironic tweets and 3,746 non-ironic. However, this corpus is not publicly available. Ghanem et al. (2019) organised a shared task competition for Arabic irony detection. They collected their data using distant supervision with similar Arabic hashtags. In addition, they manually annotated a subset of tweets, which were sampled from the ironic and non-ironic sets. The dataset provided in the shared task contained 5,030 tweets, with almost 50% of them being ironic. It is worth mentioning that, at the time of writing this paper, around 1,300 of these tweets were still available. Finally, Abbes et al. (2020) proposed a dialectal Arabic irony corpus, which was also collected from Twitter.", "cite_spans": [ { "start": 268, "end": 288, "text": "(Grice et al., 1975)", "ref_id": "BIBREF20" }, { "start": 452, "end": 474, "text": "Gibbs Jr et al. (1994)", "ref_id": "BIBREF19" }, { "start": 973, "end": 992, "text": "Justo et al. (2014)", "ref_id": "BIBREF27" }, { "start": 1225, "end": 1247, "text": "(Davidov et al., 2010)", "ref_id": "BIBREF13" }, { "start": 1723, "end": 1747, "text": "(Barbieri et al., 2014a;", "ref_id": "BIBREF9" }, { "start": 1748, "end": 1771, "text": "Bamman and Smith, 2015;", "ref_id": "BIBREF8" }, { "start": 1772, "end": 1799, "text": "Bouazizi and Ohtsuki, 2016;", "ref_id": "BIBREF11" }, { "start": 1800, "end": 1820, "text": "Pt\u00e1\u010dek et al., 2014)", "ref_id": "BIBREF38" }, { "start": 1823, "end": 1844, "text": "Davidov et al. (2010)", "ref_id": "BIBREF13" }, { "start": 2067, "end": 2087, "text": "Khodak et al. 
(2018)", "ref_id": "BIBREF29" }, { "start": 2436, "end": 2457, "text": "(Riloff et al., 2013;", "ref_id": "BIBREF41" }, { "start": 2458, "end": 2479, "text": "Van Hee et al., 2018)", "ref_id": "BIBREF43" }, { "start": 2495, "end": 2519, "text": "(Oprea and Magdy, 2019a)", "ref_id": "BIBREF35" }, { "start": 2857, "end": 2881, "text": "(Oprea and Magdy, 2019b)", "ref_id": "BIBREF36" }, { "start": 3439, "end": 3460, "text": "(Karoui et al., 2017)", "ref_id": "BIBREF28" }, { "start": 3858, "end": 3878, "text": "Ghanem et al. (2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Sarcasm and Irony Detection", "sec_num": "2.1" }, { "text": "In contrast to the recent attention coming to irony and sarcasm detection, Arabic SA has been under the researchers' radar for a while. There is a reasonable amount of Arabic SA resources that include corpora, lexicons and datasets. Early work on Arabic such as (Abdul-Mageed et al., 2011; Abbasi et al., 2008) , focused on modern standard Arabic (MSA). Later, attention started moving to dialects such as the work of (Mourad and Darwish, 2013) , where the authors introduced an expandable Arabic sentiment lexicon along with a corpus of tweets. El-Beltagy (2016) introduced a lexicon, which contains around 6000 sentiment terms that are taken from the Egyptian dialect and MSA. The Arabic Sentiment Tweets Dataset (ASTD) (Nabil et al., 2015) contains 10,006 tweets mainly in the Egyptian dialect. It is distributed over 4 classes: positive (799), negative (1,684), neutral (832) or objective (6,691). The tweets were collected over the period between 2013 and 2015, based on the most trending topics at that time. Elmadany et al. (2018) introduced ArSAS dataset, which is annotated for Arabic speech-act and sentiment analysis. The dataset consists of around 21K tweets, that cover multiple topics. The data was manually annotated using Crowd-Flower 3 crowd-sourcing platform. 
The annotation scheme for the sentiment analysis task was a 4-way sentiment classification, where each tweet is labelled with one of the following: positive (4,543), negative (7,840), neutral (7,279), or mixed (1,302). Badaro et al. (2014) introduced ArSenL, an Arabic sentiment lexicon. The lexicon was built using different resources such as Arabic WordNet and the English sentiment WordNet. In SemEval 2016, Arabic was included in the multilingual sentiment analysis task (Kiritchenko et al., 2016), where a small dataset of 1,366 tweets was introduced. In 2017, Arabic was also part of SemEval, with a larger dataset of 9,455 Arabic tweets annotated with 3 labels: positive, negative or neutral (Rosenthal et al., 2017). Other datasets and lexicons were proposed in the works of (Ibrahim et al., 2015; Refaee and Rieser, 2014; Aly and Atiya, 2013; Mahyoub et al., 2014).", "cite_spans": [ { "start": 262, "end": 289, "text": "(Abdul-Mageed et al., 2011;", "ref_id": "BIBREF2" }, { "start": 290, "end": 310, "text": "Abbasi et al., 2008)", "ref_id": "BIBREF0" }, { "start": 418, "end": 444, "text": "(Mourad and Darwish, 2013)", "ref_id": "BIBREF33" }, { "start": 722, "end": 742, "text": "(Nabil et al., 2015)", "ref_id": "BIBREF34" }, { "start": 1015, "end": 1037, "text": "Elmadany et al. (2018)", "ref_id": "BIBREF15" }, { "start": 1500, "end": 1520, "text": "Badaro et al. 
(2014)", "ref_id": "BIBREF7" }, { "start": 1763, "end": 1789, "text": "(Kiritchenko et al., 2016)", "ref_id": "BIBREF30" }, { "start": 1990, "end": 2014, "text": "(Rosenthal et al., 2017)", "ref_id": "BIBREF42" }, { "start": 2075, "end": 2097, "text": "(Ibrahim et al., 2015;", "ref_id": "BIBREF23" }, { "start": 2098, "end": 2122, "text": "Refaee and Rieser, 2014;", "ref_id": "BIBREF40" }, { "start": 2123, "end": 2143, "text": "Aly and Atiya, 2013;", "ref_id": "BIBREF5" }, { "start": 2144, "end": 2165, "text": "Mahyoub et al., 2014)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Arabic Sentiment Analysis", "sec_num": "2.2" }, { "text": "In this work, we present ArSarcasm, a new dataset for Arabic sarcasm detection. The dataset consists of a combination of Arabic SA datasets, where we reannotated them for sarcasm. In addition to that, we also provide labelling for the dialect and sentiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Dataset", "sec_num": "3" }, { "text": "In this work, we relied on a set of well-known Arabic SA datasets. The reason for this choice is that sarcasm is highly subjective and always mentioned as one of the main reasons that degrades sentiment analysers' performance. The datasets we are using are SemEval's 2017 (Rosenthal et al., 2017) and ASTD (Nabil et al., 2015) datasets. ASTD dataset consists of 10,006 tweets labelled as shown in Table 1 . The dataset contains tweets that date back to the period between 2013 and 2015. The tweets are mostly in Egyptian dialect and they were annotated using Amazon's Mechanical Turk. In our work, since we are aiming to annotate for sarcasm, we decided to eliminate the objective class and we took our sample from the other subjective classes. The other dataset we are using is the one provided in Se-mEval's 2017 task for Arabic SA (Rosenthal et al., 2017) . This dataset consists of 10,126 tweets distributed over different sets as shown in Table 2 . 
The data was annotated using the CrowdFlower 4 crowd-sourcing platform. The new dataset contains 10,547 tweets, most of which were taken from SemEval's dataset.", "cite_spans": [ { "start": 272, "end": 296, "text": "(Rosenthal et al., 2017)", "ref_id": "BIBREF42" }, { "start": 306, "end": 326, "text": "(Nabil et al., 2015)", "ref_id": "BIBREF34" }, { "start": 835, "end": 859, "text": "(Rosenthal et al., 2017)", "ref_id": "BIBREF42" } ], "ref_spans": [ { "start": 397, "end": 405, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 945, "end": 952, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Resources", "sec_num": "3.1" }, { "text": "For the annotation process, we used the Figure-Eight 5 crowd-sourcing platform. Our main objective was to annotate the data for sarcasm detection, but due to the challenges imposed by dialectal variations, we decided to add annotation for dialects as well. We also include a new annotation for sentiment labels, in order to have a glimpse of the variability and subjectivity between different annotators. Thus, the annotators were asked to provide three labels for each tweet, as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "\u2022 Sarcasm: sarcastic or non-sarcastic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "\u2022 Sentiment: positive, negative or neutral.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "\u2022 Dialect: Egyptian, Gulf, Levantine, Maghrebi or Modern Standard Arabic (MSA).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "To keep the sentiment annotation process consistent, we used the same guidelines that were used to annotate SemEval's dataset. 
Regarding sarcasm, we define it as an utterance that is used to express ridicule, where the intended meaning is different from the apparent one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "Only annotators who list Arabic in their profiles and come from an Arab country were allowed to participate. Each tweet was annotated by at least three different annotators. The quality of the annotation was monitored using a set of 100 hidden test questions that appeared randomly during the task; each of these questions has the correct label for sentiment, sarcasm and dialect. If an annotator's performance on these test questions dropped below 80%, the annotator was eliminated and all the labels they provided were discarded. Agreement among annotators was 80.7% for sentiment, 89.3% for sarcasm and 86.7% for dialects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "3.2" }, { "text": "The new dataset contains 10,547 tweets, 8,075 of which were taken from SemEval's dataset, while the rest (2,472 tweets) were taken from ASTD. Each of the tweets has three labels for sarcasm, sentiment and dialect. Table 3 shows the statistics of the new dataset, where we can see that 16% of the data is sarcastic (1,682 tweets). The new annotation shows that most of the data is either in MSA or the Egyptian dialect, while there are only a few examples of the Maghrebi dialect. Figure 1 shows the ratio of sarcasm in the tweets belonging to each dialect. The Maghrebi dialect has the largest percentage, but this is an outlier due to the small number of Maghrebi tweets (only 32 tweets). Thus, sarcasm is most prominent in the Egyptian dialect, with 34% of the Egyptian tweets being sarcastic. Also, from the table, it is noticeable that the Egyptian dialect comprises most of the sarcastic tweets (799 tweets, 47.5% of the sarcastic tweets). Table 4 provides examples of sarcastic tweets from different dialects. 
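The annotation scheme above (at least three annotators per tweet, with hidden test questions for quality control) implies that per-tweet labels must be aggregated into a single label. The paper does not spell out the aggregation rule, so the sketch below assumes a simple majority vote; the function name and the tie-breaking behaviour are illustrative assumptions, not the authors' actual pipeline:

```python
from collections import Counter

def aggregate_labels(annotations):
    # Majority vote over one tweet's annotations. The aggregation rule
    # is an assumption: the paper reports >= 3 annotators per tweet but
    # does not state how their labels were merged. Ties resolve to the
    # label that appears first in the list (Counter preserves insertion
    # order for equal counts).
    counts = Counter(annotations)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical example: three annotators disagree on a sentiment label.
print(aggregate_labels(["negative", "negative", "neutral"]))  # -> negative
```

The reported agreement figures (80.7% for sentiment, 89.3% for sarcasm, 86.7% for dialects) would then be computed over these per-tweet votes.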
Table 3 (statistics of the new dataset; columns: non-sarcastic | sarcastic | negative | neutral | positive | total): Egyptian 1,584 | 799 | 1,179 | 733 | 471 | 2,383; Gulf 397 | 122 | 200 | 218 | 101 | 519; Levantine 433 | 118 | 239 | 178 | 134 | 551; Maghrebi 20 | 12 | 18 | 10 | 4 | 32; MSA 6,431 | 631 | 1,893 | 4,201 | 968 | 7,062; Total 8,865 | 1,682 | 3,529 | 5,340 | 1,678 | 10,547. Figure 1: Ratio of sarcasm over the dialects.", "cite_spans": [], "ref_spans": [ { "start": 212, "end": 219, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 471, "end": 479, "text": "Figure 1", "ref_id": null }, { "start": 932, "end": 939, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 1003, "end": 1233, "text": "Egyptian 1,584 799 1,179 733 471 2,383 Gulf 397 122 200 218 101 519 Levantine 433 118 239 178 134 551 Maghrebi 20 12 18 10 4 32 MSA 6,431 631 1,893 4,201 968 7,062 Total 8,865 1,682 3,529 5,340", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Dataset Statistics", "sec_num": "4.1" }, { "text": "Most of the sarcastic tweets have negative sentiment, and this agrees with the definition we adopted, which implies that sarcasm involves ridiculing someone or something. However, there are some neutral and positive sarcastic tweets, which could be due to the highly subjective nature of sarcasm. In addition, this could be attributed to the fact that some other metaphoric or figurative expressions might fall under the sarcasm definition. An example of that is understatement, where a person describes a good thing using negative terms, such as \"This was an extremely hard exam\". This phenomenon is demonstrated in example 2 in Table 4, where the speaker is bragging about his success in becoming a presenter, and he mentions that this happened because his mother wished for him to be embarrassed and looked at as a weird person. Table 4 provides examples of sarcastic tweets from different dialects along with their sentiment. Those examples show some aspects of the nature of sarcasm, such as referencing real-world items or figures. 
The examples show how challenging sarcasm can be, as some of them are expressed using positive expressions yet carry negative sentiment, and vice versa. This, in turn, makes it extremely challenging for an SA system to analyse such examples, which underlines the need for sarcasm detection systems. They also show that sarcasm relies heavily on world knowledge and context; thus, incorporating such information is necessary to correctly identify sarcasm.", "cite_spans": [], "ref_spans": [ { "start": 608, "end": 615, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 808, "end": 815, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Sentiment in Sarcasm", "sec_num": "4.2" }, { "text": "We also studied the differences between the original and the new sentiment labels, which reveal a notable shift in the labels. This is empirical proof of the highly subjective nature of sentiment analysis annotation. We can see that, in the case of the positive class, more than 50% of the labels have been changed; Table 5 provides examples of these cases. From the table, it is noticeable that these cases can be attributed to different reasons. For example, in the second tweet, the original annotator failed to perceive the sarcasm intended by the author. This can be due to either a misunderstanding of the intentions, or a mismatch between the author's intention and the annotator's preference. The other reason that might have caused the labels to change is the different perspectives from which a text can be looked at. For example, some annotators might annotate news as neutral, considering the view of the news agency, while others might reflect their own preference. The same thing occurs if the text is about two conflicting parties, where the annotators are likely to take one side. In addition to that, the available Arabic SA datasets are highly political, and they contain divisive topics. 
Having all of these factors together results in a strong presence of annotators' biases and personal views. Moreover, most sentiment and sarcasm datasets were annotated using crowd-sourcing platforms. These platforms provide multiple annotations for each data point, but they do not ensure that the same annotators annotate all the data. This yields inconsistent labels for subjective text, where different conflicting biases are reflected in the assigned labels. Thus, having multiple people annotate a dataset would probably give conflicting labels for different related instances within the data. These phenomena impose challenges for sentiment analysis systems, since the boundaries between the labels are not clear. Based on the previous statistics and examples, we can see that the current annotation schemes and procedures are not robust enough against bias, and they do not ensure consistency among different annotators. In addition, the current approach of treating sarcasm detection as a binary text classification problem is not precise. Sarcasm is highly related to the context, cultural background, world knowledge and personal traits of its author. We believe that more sophisticated data collection and annotation approaches should be used to obtain a proper computational representation of sarcasm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation Subjectivity", "sec_num": "4.3" }, { "text": "To better understand how disruptive sarcasm can be for SA systems, we conducted an experiment on the newly annotated data. This was done by comparing the performance of an available SA system on both sarcastic and non-sarcastic tweets. In this experiment, we used Mazajak (Abu Farha and Magdy, 2019), a state-of-the-art Arabic sentiment analyser. 
In order to have an informative comparison, we separated the dataset into two sets, sarcastic", "cite_spans": [ { "start": 282, "end": 304, "text": "Farha and Magdy, 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Effect of Sarcasm on Sentiment Analysis", "sec_num": "5" }, { "text": "Table 5 : Examples of tweets whose labels changed (e.g., \"deceitful weather, they say it will snow and it is warm\").", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 25, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Effect of Sarcasm on Sentiment Analysis", "sec_num": "5" }, { "text": "(1,682) and non-sarcastic (8,865). The performance was compared using the original and the new sentiment labels. Table 6 shows the achieved macro F1-scores. It is clear that there is a gap between the performance on sarcastic and non-sarcastic tweets. Mazajak achieved F1-scores of 0.43 (new labels) and 0.44 (original labels) on sarcastic tweets, and F1-scores of 0.64 (new labels) and 0.61 (original labels) on the non-sarcastic ones. Although Mazajak was trained on samples from the same dataset, the results on the sarcastic tweets are much lower than those on the non-sarcastic ones. The low performance on the sarcastic tweets indicates that SA systems rely mostly on the surface sentiment expressed by the words. This, in turn, means that sarcasm, as an indirect and implicit expression tool, is a major challenge for SA systems.", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 116, "text": "Table 6", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Effect of Sarcasm on Sentiment Analysis", "sec_num": "5" }, { "text": "In this section, we conduct an experiment to set a baseline system for the new dataset. We tested a deep learning model, which consists of a bidirectional long short-term memory (BiLSTM) followed by a fully connected layer. We used the hyper-parameters shown in Table 7. 
For text representation, we utilised the embeddings provided by (Abu Farha and Magdy, 2019). Table 7 : Hyper-parameters used for the BiLSTM model.", "cite_spans": [ { "start": 340, "end": 361, "text": "Farha and Magdy, 2019", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 261, "end": 268, "text": "Table 7", "ref_id": null }, { "start": 362, "end": 369, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Sarcasm Detection Baseline System", "sec_num": "6" }, { "text": "The data was divided using an 80/20 split to create training and testing sets. Table 8 shows the results achieved by the model on the sarcastic class. As shown, the system detected sarcasm with a precision of 62% but a quite low recall of only 38%, which demonstrates that it is not straightforward to spot sarcasm. The overall F1-score is 0.46, which empirically shows that sarcasm detection is a challenging task that requires additional investigation. An example of such investigation is the use of contextual information alongside the text itself, which proved to be effective in English sarcasm detection (Oprea and Magdy, 2019a).", "cite_spans": [ { "start": 591, "end": 615, "text": "(Oprea and Magdy, 2019a)", "ref_id": "BIBREF35" } ], "ref_spans": [ { "start": 79, "end": 86, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Sarcasm Detection Baseline System", "sec_num": "6" }, { "text": "Precision: 0.62; Recall: 0.38; F1-score: 0.46. Table 8 : Baseline results on the sarcastic class.", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 62, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Sarcasm Detection Baseline System", "sec_num": "6" }, { "text": "From the previous experiment, we conclude that sarcasm detection is a challenging task, and it relies heavily on context, world knowledge and cultural background. 
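The precision, recall and F1-score reported in Table 8 follow the standard per-class definitions. A minimal sketch in plain Python (the helper name and the synthetic labels below are illustrative, not the paper's actual data or code):

```python
def prf1(gold, pred, positive="sarcastic"):
    # Precision, recall and F1 for one class, as reported for the
    # sarcastic class in Table 8. gold/pred are parallel label lists.
    tp = sum(1 for g, p in zip(gold, pred) if g == positive and p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Synthetic example (not the paper's data):
gold = ["sarcastic", "sarcastic", "non-sarcastic", "non-sarcastic"]
pred = ["sarcastic", "non-sarcastic", "non-sarcastic", "sarcastic"]
print(prf1(gold, pred))  # -> (0.5, 0.5, 0.5)
```

Note that the harmonic mean of the reported precision (0.62) and recall (0.38) works out to roughly 0.47, consistent with the reported F1 of 0.46 given that the published precision and recall are themselves rounded.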
Thus, building better detection systems relies heavily on how these aspects are incorporated into the training and preparation of such systems (Oprea and Magdy, 2019b).", "cite_spans": [ { "start": 319, "end": 343, "text": "(Oprea and Magdy, 2019b)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Sarcasm Detection Baseline System", "sec_num": "6" }, { "text": "Sarcasm is an important aspect of any language. It involves expressing ideas, opinions and emotions in an indirect, implicit way. This implicitness makes sarcasm problematic for SA systems, which mostly rely on surface meaning and features. In this work, we presented ArSarcasm, a new Arabic sarcasm dataset. The dataset was created through the reannotation of available Arabic sentiment datasets. The new dataset contains sarcasm, sentiment and dialect labels. Analysis shows that sarcasm is prominent in sentiment datasets, with 16% of the tweets being sarcastic. We also show the highly subjective nature of such datasets, which was demonstrated by the change in sentiment labels in the new annotation. The experiments show the gap between SA systems' performance on non-sarcastic tweets compared to sarcastic tweets, which highlights the need to study such phenomena. Finally, our initial experiments on sarcasm detection show that it is a challenging task. We believe that this dataset is a starting point towards a full study of sarcasm and figurative language in Arabic. However, due to the highly subjective nature of sarcasm, its reliance on world knowledge, cultural background and the perspectives of the communication parties, we believe that the data collection procedure should incorporate more signals about this information. In the future, we hope to prepare a new dataset that incorporates more textual information. 
We also hope to study and analyse the differences and similarities among sarcastic expressions used by Arabic speakers in different countries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7" }, { "text": "ArSarcasm is available at: https://github.com/iabufarha/ArSarcasm 2 https://www.merriam-webster.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Currently known as Figure-Eight", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.figure-eight.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the D&S Programme of The Alan Turing Institute under the EPSRC grant EP/N510129/1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sentiment analysis in multiple languages: Feature selection for opinion classification in web forums", "authors": [ { "first": "A", "middle": [], "last": "Abbasi", "suffix": "" }, { "first": "H", "middle": [], "last": "Chen", "suffix": "" }, { "first": "A", "middle": [], "last": "Salem", "suffix": "" } ], "year": 2008, "venue": "ACM Transactions on Information Systems (TOIS)", "volume": "26", "issue": "3", "pages": "1--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abbasi, A., Chen, H., and Salem, A. (2008). Sentiment analysis in multiple languages: Feature selection for opinion classification in web forums. 
ACM Transactions on Information Systems (TOIS), 26(3):1-34.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Daict: A dialectal arabic irony corpus extracted from twitter", "authors": [ { "first": "I", "middle": [], "last": "Abbes", "suffix": "" }, { "first": "W", "middle": [], "last": "Zaghouani", "suffix": "" }, { "first": "O", "middle": [], "last": "El-Hardlo", "suffix": "" } ], "year": 2020, "venue": "", "volume": "2020", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abbes, I., Zaghouani, W., and El-Hardlo, O. (2020). Daict: A dialectal arabic irony corpus extracted from twitter. LREC 2020.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Subjectivity and sentiment analysis of modern standard arabic", "authors": [ { "first": "M", "middle": [], "last": "Abdul-Mageed", "suffix": "" }, { "first": "M", "middle": [ "T" ], "last": "Diab", "suffix": "" }, { "first": "M", "middle": [], "last": "Korayem", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers", "volume": "2", "issue": "", "pages": "587--591", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdul-Mageed, M., Diab, M. T., and Korayem, M. (2011). Subjectivity and sentiment analysis of modern standard arabic. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers -Volume 2, HLT '11, pages 587-591, Stroudsburg, PA, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Putting sarcasm detection into context: The effects of class imbalance and manual labelling on supervised machine classification of twitter conversations", "authors": [ { "first": "G", "middle": [], "last": "Abercrombie", "suffix": "" }, { "first": "D", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the ACL 2016 Student Research Workshop", "volume": "", "issue": "", "pages": "107--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abercrombie, G. and Hovy, D. (2016). Putting sarcasm detection into context: The effects of class imbalance and manual labelling on supervised machine classification of twitter conversations. In Proceedings of the ACL 2016 Student Research Workshop, pages 107-113.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Mazajak: An online Arabic sentiment analyser", "authors": [ { "first": "Abu", "middle": [], "last": "Farha", "suffix": "" }, { "first": "I", "middle": [], "last": "Magdy", "suffix": "" }, { "first": "W", "middle": [], "last": "", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Arabic Natural Language Processing Workshop", "volume": "", "issue": "", "pages": "192--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abu Farha, I. and Magdy, W. (2019). Mazajak: An online Arabic sentiment analyser. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 192-198, Florence, Italy, August. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Labr: A large scale arabic book reviews dataset", "authors": [ { "first": "M", "middle": [], "last": "Aly", "suffix": "" }, { "first": "A", "middle": [], "last": "Atiya", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "494--498", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aly, M. and Atiya, A. (2013). Labr: A large scale arabic book reviews dataset. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 494-498.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Modelling context with user embeddings for sarcasm detection in social media", "authors": [ { "first": "S", "middle": [], "last": "Amir", "suffix": "" }, { "first": "B", "middle": [ "C" ], "last": "Wallace", "suffix": "" }, { "first": "H", "middle": [], "last": "Lyu", "suffix": "" }, { "first": "P", "middle": [], "last": "Carvalho", "suffix": "" }, { "first": "M", "middle": [ "J" ], "last": "Silva", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "167--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amir, S., Wallace, B. C., Lyu, H., Carvalho, P., and Silva, M. J. (2016). Modelling context with user embeddings for sarcasm detection in social media. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 167-177, Berlin, Germany, August. 
Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A large scale arabic sentiment lexicon for arabic opinion mining", "authors": [ { "first": "G", "middle": [], "last": "Badaro", "suffix": "" }, { "first": "R", "middle": [], "last": "Baly", "suffix": "" }, { "first": "H", "middle": [], "last": "Hajj", "suffix": "" }, { "first": "N", "middle": [], "last": "Habash", "suffix": "" }, { "first": "W", "middle": [], "last": "El-Hajj", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the EMNLP 2014 workshop on arabic natural language processing (ANLP)", "volume": "", "issue": "", "pages": "165--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Badaro, G., Baly, R., Hajj, H., Habash, N., and El-Hajj, W. (2014). A large scale arabic sentiment lexicon for arabic opinion mining. In Proceedings of the EMNLP 2014 workshop on arabic natural language processing (ANLP), pages 165-173.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Contextualized sarcasm detection on twitter", "authors": [ { "first": "D", "middle": [], "last": "Bamman", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2015, "venue": "Ninth International AAAI Conference on Web and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bamman, D. and Smith, N. A. (2015). Contextualized sarcasm detection on twitter. 
In Ninth International AAAI Conference on Web and Social Media.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Italian irony detection in twitter: a first approach", "authors": [ { "first": "F", "middle": [], "last": "Barbieri", "suffix": "" }, { "first": "F", "middle": [], "last": "Ronzano", "suffix": "" }, { "first": "H", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2014, "venue": "The First Italian Conference on Computational Linguistics CLiCit", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbieri, F., Ronzano, F., and Saggion, H. (2014a). Italian irony detection in twitter: a first approach. In The First Italian Conference on Computational Linguistics CLiC-it, page 28.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Modelling sarcasm in twitter, a novel approach", "authors": [ { "first": "F", "middle": [], "last": "Barbieri", "suffix": "" }, { "first": "H", "middle": [], "last": "Saggion", "suffix": "" }, { "first": "F", "middle": [], "last": "Ronzano", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "50--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbieri, F., Saggion, H., and Ronzano, F. (2014b). Modelling sarcasm in twitter, a novel approach. In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 50-58.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A pattern-based approach for sarcasm detection on twitter", "authors": [ { "first": "M", "middle": [], "last": "Bouazizi", "suffix": "" }, { "first": "T", "middle": [ "O" ], "last": "Ohtsuki", "suffix": "" } ], "year": 2016, "venue": "IEEE Access", "volume": "4", "issue": "", "pages": "5477--5488", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bouazizi, M. and Ohtsuki, T. 
O. (2016). A pattern-based approach for sarcasm detection on twitter. IEEE Access, 4:5477-5488.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Arabic information retrieval", "authors": [ { "first": "K", "middle": [], "last": "Darwish", "suffix": "" }, { "first": "W", "middle": [], "last": "Magdy", "suffix": "" } ], "year": 2014, "venue": "Foundations and Trends in Information Retrieval", "volume": "7", "issue": "4", "pages": "239--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Darwish, K., Magdy, W., et al. (2014). Arabic information retrieval. Foundations and Trends in Information Retrieval, 7(4):239-342.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semisupervised recognition of sarcastic sentences in twitter and amazon", "authors": [ { "first": "D", "middle": [], "last": "Davidov", "suffix": "" }, { "first": "O", "middle": [], "last": "Tsur", "suffix": "" }, { "first": "A", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the fourteenth conference on computational natural language learning", "volume": "", "issue": "", "pages": "107--116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Davidov, D., Tsur, O., and Rappoport, A. (2010). Semi-supervised recognition of sarcastic sentences in twitter and amazon. In Proceedings of the fourteenth conference on computational natural language learning, pages 107-116. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Nileulex: A phrase and word level sentiment lexicon for egyptian and modern standard arabic", "authors": [ { "first": "S", "middle": [ "R" ], "last": "El-Beltagy", "suffix": "" } ], "year": 2016, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "El-Beltagy, S. R. (2016). Nileulex: A phrase and word level sentiment lexicon for egyptian and modern standard arabic. 
In LREC.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Arsas: An arabic speech-act and sentiment corpus of tweets", "authors": [ { "first": "A", "middle": [ "A" ], "last": "Elmadany", "suffix": "" }, { "first": "H", "middle": [], "last": "Mubarak", "suffix": "" }, { "first": "W", "middle": [], "last": "Magdy", "suffix": "" } ], "year": 2018, "venue": "OSACT 3: The 3rd Workshop on Open-Source Arabic Corpora and Processing Tools", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elmadany, A. A., Mubarak, H., and Magdy, W. (2018). Arsas: An arabic speech-act and sentiment corpus of tweets. In OSACT 3: The 3rd Workshop on Open-Source Arabic Corpora and Processing Tools, page 20.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Irony and sarcasm: Corpus generation and analysis using crowdsourcing", "authors": [ { "first": "E", "middle": [], "last": "Filatova", "suffix": "" } ], "year": 2012, "venue": "Lrec", "volume": "", "issue": "", "pages": "392--398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Filatova, E. (2012). Irony and sarcasm: Corpus generation and analysis using crowdsourcing. In LREC, pages 392-398. Citeseer.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Idat at fire2019: Overview of the track on irony detection in arabic tweets", "authors": [ { "first": "B", "middle": [], "last": "Ghanem", "suffix": "" }, { "first": "J", "middle": [], "last": "Karoui", "suffix": "" }, { "first": "F", "middle": [], "last": "Benamara", "suffix": "" }, { "first": "V", "middle": [], "last": "Moriceau", "suffix": "" }, { "first": "P", "middle": [], "last": "Rosso", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 11th Forum for Information Retrieval Evaluation", "volume": "", "issue": "", "pages": "10--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ghanem, B., Karoui, J., Benamara, F., Moriceau, V., and Rosso, P. (2019). 
Idat at fire2019: Overview of the track on irony detection in arabic tweets. In Proceedings of the 11th Forum for Information Retrieval Evaluation, pages 10-13.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words", "authors": [ { "first": "D", "middle": [], "last": "Ghosh", "suffix": "" }, { "first": "W", "middle": [], "last": "Guo", "suffix": "" }, { "first": "S", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2015, "venue": "proceedings of the 2015 conference on empirical methods in natural language processing", "volume": "", "issue": "", "pages": "1003--1012", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ghosh, D., Guo, W., and Muresan, S. (2015). Sarcastic or not: Word embeddings to predict the literal or sarcastic meaning of words. In proceedings of the 2015 conference on empirical methods in natural language processing, pages 1003-1012.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The poetics of mind: Figurative thought, language, and understanding", "authors": [ { "first": "R", "middle": [ "W" ], "last": "Gibbs", "suffix": "" }, { "first": "R", "middle": [ "W" ], "last": "Gibbs", "suffix": "" }, { "first": "J", "middle": [], "last": "Gibbs", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gibbs Jr, R. W., Gibbs, R. W., and Gibbs, J. (1994). The poetics of mind: Figurative thought, language, and understanding. 
Cambridge University Press.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Syntax and semantics", "authors": [ { "first": "H", "middle": [ "P" ], "last": "Grice", "suffix": "" }, { "first": "P", "middle": [], "last": "Cole", "suffix": "" }, { "first": "J", "middle": [ "L" ], "last": "Morgan", "suffix": "" } ], "year": 1975, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grice, H. P., Cole, P., and Morgan, J. L. (1975). Syntax and semantics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Introduction to arabic natural language processing", "authors": [ { "first": "N", "middle": [ "Y" ], "last": "Habash", "suffix": "" } ], "year": 2010, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "3", "issue": "1", "pages": "1--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Habash, N. Y. (2010). Introduction to arabic natural language processing. Synthesis Lectures on Human Language Technologies, 3(1):1-187.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A survey on sentiment analysis challenges", "authors": [ { "first": "D", "middle": [ "M E" ], "last": "Hussein", "suffix": "" }, { "first": "-D", "middle": [ "M" ], "last": "", "suffix": "" } ], "year": 2018, "venue": "Journal of King Saud University -Engineering Sciences", "volume": "30", "issue": "4", "pages": "330--338", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hussein, D. M. E.-D. M. (2018). A survey on sentiment analysis challenges. 
Journal of King Saud University - Engineering Sciences, 30(4):330-338.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Mika: A tagged corpus for modern standard arabic and colloquial sentiment analysis", "authors": [ { "first": "H", "middle": [ "S" ], "last": "Ibrahim", "suffix": "" }, { "first": "S", "middle": [ "M" ], "last": "Abdou", "suffix": "" }, { "first": "M", "middle": [], "last": "Gheith", "suffix": "" } ], "year": 2015, "venue": "2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS)", "volume": "", "issue": "", "pages": "353--358", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ibrahim, H. S., Abdou, S. M., and Gheith, M. (2015). Mika: A tagged corpus for modern standard arabic and colloquial sentiment analysis. In 2015 IEEE 2nd International Conference on Recent Trends in Information Systems (ReTIS), pages 353-358. IEEE.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Harnessing context incongruity for sarcasm detection", "authors": [ { "first": "A", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "V", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "P", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "757--762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshi, A., Sharma, V., and Bhattacharyya, P. (2015). Harnessing context incongruity for sarcasm detection. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 757-762.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Harnessing sequence labeling for sarcasm detection in dialogue from tv series 'friends", "authors": [ { "first": "A", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "V", "middle": [], "last": "Tripathi", "suffix": "" }, { "first": "P", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "Carman", "middle": [], "last": "", "suffix": "" }, { "first": "M", "middle": [], "last": "", "suffix": "" } ], "year": 2016, "venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "146--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshi, A., Tripathi, V., Bhattacharyya, P., and Carman, M. (2016). Harnessing sequence labeling for sarcasm detection in dialogue from tv series 'friends'. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 146-155.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Automatic sarcasm detection: A survey", "authors": [ { "first": "A", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "P", "middle": [], "last": "Bhattacharyya", "suffix": "" }, { "first": "M", "middle": [ "J" ], "last": "Carman", "suffix": "" } ], "year": 2017, "venue": "ACM Computing Surveys (CSUR)", "volume": "50", "issue": "5", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshi, A., Bhattacharyya, P., and Carman, M. J. (2017). Automatic sarcasm detection: A survey. ACM Computing Surveys (CSUR), 50(5):73.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Extracting relevant knowledge for the detection of sarcasm and nastiness in the social web. 
Knowledge-Based Systems", "authors": [ { "first": "R", "middle": [], "last": "Justo", "suffix": "" }, { "first": "T", "middle": [], "last": "Corcoran", "suffix": "" }, { "first": "S", "middle": [ "M" ], "last": "Lukin", "suffix": "" }, { "first": "M", "middle": [], "last": "Walker", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Torres", "suffix": "" } ], "year": 2014, "venue": "", "volume": "69", "issue": "", "pages": "124--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justo, R., Corcoran, T., Lukin, S. M., Walker, M., and Torres, M. I. (2014). Extracting relevant knowledge for the detection of sarcasm and nastiness in the social web. Knowledge-Based Systems, 69:124-133.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Soukhria: Towards an irony detection system for arabic in social media", "authors": [ { "first": "J", "middle": [], "last": "Karoui", "suffix": "" }, { "first": "F", "middle": [ "B" ], "last": "Zitoune", "suffix": "" }, { "first": "V", "middle": [], "last": "Moriceau", "suffix": "" } ], "year": 2017, "venue": "Procedia Computer Science", "volume": "117", "issue": "", "pages": "161--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karoui, J., Zitoune, F. B., and Moriceau, V. (2017). Soukhria: Towards an irony detection system for arabic in social media. Procedia Computer Science, 117:161-168.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A large self-annotated corpus for sarcasm", "authors": [ { "first": "M", "middle": [], "last": "Khodak", "suffix": "" }, { "first": "N", "middle": [], "last": "Saunshi", "suffix": "" }, { "first": "K", "middle": [], "last": "Vodrahalli", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Khodak, M., Saunshi, N., and Vodrahalli, K. (2018). 
A large self-annotated corpus for sarcasm. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. European Language Resources Association (ELRA).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Semeval-2016 task 7: Determining sentiment intensity of english and arabic phrases", "authors": [ { "first": "S", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "S", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "M", "middle": [], "last": "Salameh", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th international workshop on semantic evaluation (SEMEVAL-2016)", "volume": "", "issue": "", "pages": "42--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kiritchenko, S., Mohammad, S., and Salameh, M. (2016). Semeval-2016 task 7: Determining sentiment intensity of english and arabic phrases. In Proceedings of the 10th international workshop on semantic evaluation (SEMEVAL-2016), pages 42-51.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Sentiment Analysis and Opinion Mining", "authors": [ { "first": "B", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2012, "venue": "Synthesis Lectures on Human Language Technologies", "volume": "5", "issue": "1", "pages": "1--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, B. (2012). Sentiment Analysis and Opinion Mining. 
Synthesis Lectures on Human Language Technologies, 5(1):1-167.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Building an arabic sentiment lexicon using semisupervised learning", "authors": [ { "first": "F", "middle": [ "H" ], "last": "Mahyoub", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Siddiqui", "suffix": "" }, { "first": "M", "middle": [ "Y" ], "last": "Dahab", "suffix": "" } ], "year": 2014, "venue": "Journal of King Saud University -Computer and Information Sciences", "volume": "26", "issue": "4", "pages": "417--424", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahyoub, F. H., Siddiqui, M. A., and Dahab, M. Y. (2014). Building an arabic sentiment lexicon using semi-supervised learning. Journal of King Saud University - Computer and Information Sciences, 26(4):417-424. Special Issue on Arabic NLP.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Subjectivity and sentiment analysis of modern standard arabic and arabic microblogs", "authors": [ { "first": "A", "middle": [], "last": "Mourad", "suffix": "" }, { "first": "K", "middle": [], "last": "Darwish", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 4th workshop on computational approaches to subjectivity, sentiment and social media analysis", "volume": "", "issue": "", "pages": "55--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mourad, A. and Darwish, K. (2013). Subjectivity and sentiment analysis of modern standard arabic and arabic microblogs. 
In Proceedings of the 4th workshop on computational approaches to subjectivity, sentiment and social media analysis, pages 55-64.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Astd: Arabic sentiment tweets dataset", "authors": [ { "first": "M", "middle": [], "last": "Nabil", "suffix": "" }, { "first": "M", "middle": [], "last": "Aly", "suffix": "" }, { "first": "A", "middle": [], "last": "Atiya", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2515--2519", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nabil, M., Aly, M., and Atiya, A. (2015). Astd: Arabic sentiment tweets dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2515-2519.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Exploring author context for detecting intended vs perceived sarcasm", "authors": [ { "first": "S", "middle": [], "last": "Oprea", "suffix": "" }, { "first": "W", "middle": [], "last": "Magdy", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2854--2859", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oprea, S. and Magdy, W. (2019a). Exploring author context for detecting intended vs perceived sarcasm. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2854-2859, Florence, Italy, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "isarcasm: A dataset of intended sarcasm", "authors": [ { "first": "S", "middle": [], "last": "Oprea", "suffix": "" }, { "first": "W", "middle": [], "last": "Magdy", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1911.03123" ] }, "num": null, "urls": [], "raw_text": "Oprea, S. and Magdy, W. (2019b). isarcasm: A dataset of intended sarcasm. arXiv preprint arXiv:1911.03123.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Thumbs up?: sentiment classification using machine learning techniques", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing", "volume": "10", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pang, B., Lee, L., and Vaithyanathan, S. (2002). Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing - Volume 10, pages 79-86. 
Association for Computational Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Sarcasm detection on czech and english twitter", "authors": [ { "first": "T", "middle": [], "last": "Pt\u00e1\u010dek", "suffix": "" }, { "first": "I", "middle": [], "last": "Habernal", "suffix": "" }, { "first": "J", "middle": [], "last": "Hong", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "213--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pt\u00e1\u010dek, T., Habernal, I., and Hong, J. (2014). Sarcasm detection on czech and english twitter. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 213-223.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Sarcasm detection on twitter: A behavioral modeling approach", "authors": [ { "first": "A", "middle": [], "last": "Rajadesingan", "suffix": "" }, { "first": "R", "middle": [], "last": "Zafarani", "suffix": "" }, { "first": "H", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Eighth ACM International Conference on Web Search and Data Mining", "volume": "", "issue": "", "pages": "97--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajadesingan, A., Zafarani, R., and Liu, H. (2015). Sarcasm detection on twitter: A behavioral modeling approach. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 97-106. 
ACM.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "An arabic twitter corpus for subjectivity and sentiment analysis", "authors": [ { "first": "E", "middle": [], "last": "Refaee", "suffix": "" }, { "first": "V", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2014, "venue": "LREC", "volume": "", "issue": "", "pages": "2268--2273", "other_ids": {}, "num": null, "urls": [], "raw_text": "Refaee, E. and Rieser, V. (2014). An arabic twitter corpus for subjectivity and sentiment analysis. In LREC, pages 2268-2273.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Sarcasm as contrast between a positive sentiment and negative situation", "authors": [ { "first": "E", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "A", "middle": [], "last": "Qadir", "suffix": "" }, { "first": "P", "middle": [], "last": "Surve", "suffix": "" }, { "first": "L", "middle": [], "last": "De Silva", "suffix": "" }, { "first": "N", "middle": [], "last": "Gilbert", "suffix": "" }, { "first": "R", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "704--714", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riloff, E., Qadir, A., Surve, P., De Silva, L., Gilbert, N., and Huang, R. (2013). Sarcasm as contrast between a positive sentiment and negative situation. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 704-714.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "SemEval-2017 task 4: Sentiment analysis in Twitter", "authors": [ { "first": "S", "middle": [], "last": "Rosenthal", "suffix": "" }, { "first": "N", "middle": [], "last": "Farra", "suffix": "" }, { "first": "P", "middle": [], "last": "Nakov", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval '17", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosenthal, S., Farra, N., and Nakov, P. (2017). SemEval-2017 task 4: Sentiment analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation, SemEval '17, Vancouver, Canada, August. Association for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "SemEval-2018 task 3: Irony detection in English tweets", "authors": [ { "first": "C", "middle": [], "last": "Van Hee", "suffix": "" }, { "first": "E", "middle": [], "last": "Lefever", "suffix": "" }, { "first": "V", "middle": [], "last": "Hoste", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "39--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Van Hee, C., Lefever, E., and Hoste, V. (2018). SemEval-2018 task 3: Irony detection in English tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 39-50.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "The pragmatics of verbal irony: Echo or pretence?", "authors": [ { "first": "D", "middle": [], "last": "Wilson", "suffix": "" } ], "year": 2006, "venue": "Lingua", "volume": "116", "issue": "10", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilson, D. (2006).
The pragmatics of verbal irony: Echo or pretence? Lingua, 116(10):1722.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "shows the sentiment distribution over the sarcastic tweets. It is clear that most of the sarcastic tweets. Dialect | Non-Sarcastic | Sarcastic | Negative | Neutral | Positive | Total", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "Figure 3 shows how the new labels are different from the original ones; the labels above the charts are the original ones. It is clear that there is an extreme change. Sentiment distribution over the sarcastic tweets.", "type_str": "figure", "uris": null, "num": null }, "FIGREF2": { "text": "The change in sentiment labels between the original and new annotation. The labels above the charts are the original labels.", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "s reputation is on the line ... A real problem in iPhone 7)", "type_str": "figure", "uris": null, "num": null }, "TABREF1": { "text": "", "num": null, "html": null, "content": "
Sarcastic percentage per dialect:
Dialect | Sarcastic
MSA | 9%
Levant | 21%
Egypt | 34%
Gulf | 24%
Maghreb | 38%