{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:06:15.046035Z" }, "title": "An Arabic Tweets Sentiment Analysis Dataset (ATSAD) using Distant Supervision and Self Training", "authors": [ { "first": "Kathrein", "middle": [ "Abu" ], "last": "Kwaik", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Gothenburg", "location": { "country": "Sweden" } }, "email": "kathrein.abu.kwaik@gu.se" }, { "first": "Motaz", "middle": [], "last": "Saad", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Islamic University of Gaza", "location": { "country": "Palestine" } }, "email": "motaz.saad@gmail.com" }, { "first": "Stergios", "middle": [], "last": "Chatzikyriakidis", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Gothenburg", "location": { "country": "Sweden" } }, "email": "stergios.chatzikyriakidis@gu.se" }, { "first": "Simon", "middle": [], "last": "Dobnik", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Gothenburg", "location": { "country": "Sweden" } }, "email": "simon.dobnik@gu.se" }, { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Gothenburg", "location": { "country": "Sweden" } }, "email": "richard.johansson@gu.se" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "As the number of social media users increases, they express their thoughts, needs, socialise and publish their opinions. For good social media sentiment analysis, good quality resources are needed, and the lack of these resources is particularly evident for languages other than English, in particular Arabic. The available Arabic resources lack of from either the size of the corpus or the quality of the annotation. 
In this paper, we present an Arabic Sentiment Analysis Corpus collected from Twitter, which contains 36K tweets labelled as positive or negative. We annotated the corpus using a combination of distant supervision and self-training. In addition, we release 8K manually annotated tweets as a gold standard. We evaluated the corpus intrinsically by comparing it to human classification and to pre-trained sentiment analysis models. Moreover, we applied extrinsic evaluation on a sentiment analysis task and achieved an accuracy of 86%.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "As the number of social media users increases, they express their thoughts and needs, socialise, and publish their opinions. Good social media sentiment analysis requires good-quality resources, and the lack of such resources is particularly evident for languages other than English, in particular Arabic. The available Arabic resources fall short in either corpus size or annotation quality. In this paper, we present an Arabic Sentiment Analysis Corpus collected from Twitter, which contains 36K tweets labelled as positive or negative. We annotated the corpus using a combination of distant supervision and self-training. In addition, we release 8K manually annotated tweets as a gold standard. We evaluated the corpus intrinsically by comparing it to human classification and to pre-trained sentiment analysis models. Moreover, we applied extrinsic evaluation on a sentiment analysis task and achieved an accuracy of 86%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Companies and business stakeholders reach out to their customers through social media not only for advertising and marketing purposes, but also to get customer feedback on products or services. 
This is one of the main reasons that sentiment analysis applications have become increasingly sought after by industry. Even though sentiment analysis programs are widely used in the commercial sector, they have many other important uses, including political orientation analysis, electoral programmes and decision-making. Sentiment analysis is the process of automatically mining attitudes, opinions, views and emotions from text, speech and tweets using Natural Language Processing (NLP) and machine learning (Liu, 2012). It involves classifying opinions into classes such as positive, negative, mixed or neutral; it can also refer to subjectivity analysis, i.e. the task of distinguishing between objective and subjective text. Arabic speakers around the world speak many regional varieties of Arabic, of which only one, Modern Standard Arabic (MSA), is standardised. Social media is prevalent, and it is precisely in this domain that the local varieties are used and for which resources are most limited. The total number of monthly active Twitter users in the Arab region was estimated at 11.1 million in March 2017, generating 27.4 million tweets per day according to weedoo 1 (https://weedoo.tech/twitter-arab-world-statistics-feb-2017/). Arabic, especially its dialects, still lacks efficient resources for NLP tasks. One of the biggest challenges in the construction of Arabic NLP resources is the large variation within the language: Modern Standard Arabic (MSA), Classical Arabic (CA) and the dialects. As a result, some tasks may require stand-alone resources for each individual variety, since the available tools built for MSA cannot be adapted to the dialects and vice versa (Qwaider et al., 2019). 
In addition, building highly efficient resources requires sufficient time and funding. Moreover, deep learning NLP methods require huge amounts of data. Since Twitter's unique features are widely used to express opinions, views, thoughts and feelings, we present the Arabic Tweets Sentiment Analysis Dataset (ATSAD), which contains 36K tweets classified as positive or negative.", "cite_spans": [ { "start": 722, "end": 733, "text": "(Liu, 2012)", "ref_id": "BIBREF21" }, { "start": 2088, "end": 2110, "text": "(Qwaider et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The contributions of this paper fall under two headings: a) resource creation and b) resource evaluation. Regarding resource creation, we introduce a sentiment analysis dataset collected from Twitter; as for resource evaluation, we introduce a method that combines the distant supervision approach with self-training to build a dataset that satisfies both the size and the quality requirements. To annotate a large number of tweets, we employ the distant supervision approach, in which emojis are used as weak, noisy labels. We manually annotate a subset of 8K tweets of the dataset and offer it as a gold standard dataset. To improve the quality of the corpus, we apply self-training to the dataset and combine it with the distant supervision approach in a double-check procedure. Using our proposed double-check approach, we achieve an accuracy of 86% on the sentiment analysis task. The dataset is available online for research use. 2 The rest of the paper is organised as follows: Section 2 reviews related work on sentiment analysis and social media resources. In Section 3, the challenges of processing Twitter text are presented, and in Section 4, the details of collecting and creating the tweets dataset are given. 
We evaluate the dataset in Section 5. Sections 6 and 7 present the conclusion and future work, respectively.", "cite_spans": [ { "start": 977, "end": 978, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Arabic Sentiment Analysis (ASA) has received considerable attention in terms of resource creation (Rushdi-Saleh et al., 2011; Aly and Atiya, 2013; Abdul-Mageed et al., 2014; Elnagar et al., 2018). These resources are collected from different sources (blogs, reviews, tweets, comments, etc.) and involve a mix of Arabic vernaculars and Classical Arabic. Furthermore, they have been used extensively in research on SA for Arabic (Al Shboul et al., 2015; Obaidat et al., 2015; Al-Twairesh et al., 2016). Most NLP work on SA uses machine learning classifiers with feature engineering. For example, (Azmi and Alzanin, 2014; El-Beltagy et al., 2016) applied machine learning classifiers to polarity and subjectivity classification, while more recent papers (Al Sallab et al., 2015; Dahou et al., 2016; Alayba et al., 2018) investigated the use of deep neural networks for Arabic sentiment analysis. Most of the datasets are collected from web blogs and customer reviews. Some are manually annotated following specific annotation guidelines, while other corpora, like LABR (Aly and Atiya, 2013), rely on users' star ratings as polarity labels: 5 stars denote highly positive, 1 star highly negative, and 3 stars neutral or mixed. For the AraSenTi-tweets corpus (Al-Twairesh et al., 2017), several approaches were adopted to collect tweets, e.g. the use of emoticons, sentiment hashtags and sentiment keywords. The authors then kept only the tweets whose location is set to Saudi Arabia. The dataset is manually annotated according to a set of annotation guidelines. 
It contains 17,573 tweets, each classified into one of four classes (positive, negative, mixed or neutral). A sentiment baseline built on TF-IDF features and an SVM with a linear kernel achieved an F-score of 60.05%. In (Nabil et al., 2015), the authors presented the Arabic Sentiment Tweets Dataset (ASTD), a dataset of 10,000 Egyptian tweets composed of 799 positive, 1,684 negative, 832 mixed and 6,691 neutral tweets. The authors also conducted a set of benchmark experiments for four-way sentiment classification (positive, negative, mixed, neutral) and two-way sentiment classification (positive, negative). For two-way classification, the corpus is unbalanced and too small to be useful. A corpus of Jordanian tweets is presented in (Atoum and Nouman, 2019). The authors collected tweets by location and then filtered them using different types of terminology to identify Jordanian Arabic dialect keywords efficiently. The corpus contains 3,550 Jordanian dialect tweets manually annotated as follows: 616 positive, 1,313 negative, and 1,621 neutral. They conducted several experiments, with and without stemming/rooting, on several models with unigram/bigram features, trying NB and SVM classifiers. The results show that the SVM classifier performs better than the NB classifier, with average ROC performance of 0.71 for NB and 0.77 for SVM across all experiments. 
A similar corpus for Levantine dialects is presented in Shami-Senti (Qwaider et al., 2019).", "cite_spans": [ { "start": 98, "end": 125, "text": "(Rushdi-Saleh et al., 2011;", "ref_id": "BIBREF28" }, { "start": 126, "end": 146, "text": "Aly and Atiya, 2013;", "ref_id": "BIBREF9" }, { "start": 147, "end": 173, "text": "Abdul-Mageed et al., 2014;", "ref_id": "BIBREF0" }, { "start": 174, "end": 195, "text": "Elnagar et al., 2018)", "ref_id": "BIBREF16" }, { "start": 259, "end": 299, "text": "(blogs, reviews, tweets, comments, etc.)", "ref_id": null }, { "start": 443, "end": 467, "text": "(Al Shboul et al., 2015;", "ref_id": null }, { "start": 468, "end": 489, "text": "Obaidat et al., 2015;", "ref_id": "BIBREF24" }, { "start": 490, "end": 515, "text": "Al-Twairesh et al., 2016)", "ref_id": "BIBREF5" }, { "start": 610, "end": 634, "text": "(Azmi and Alzanin, 2014;", "ref_id": "BIBREF11" }, { "start": 635, "end": 659, "text": "El-Beltagy et al., 2016)", "ref_id": "BIBREF15" }, { "start": 763, "end": 787, "text": "(Al Sallab et al., 2015;", "ref_id": "BIBREF2" }, { "start": 788, "end": 807, "text": "Dahou et al., 2016;", "ref_id": "BIBREF14" }, { "start": 808, "end": 828, "text": "Alayba et al., 2018)", "ref_id": "BIBREF7" }, { "start": 1079, "end": 1100, "text": "(Aly and Atiya, 2013)", "ref_id": "BIBREF9" }, { "start": 1908, "end": 1928, "text": "(Nabil et al., 2015)", "ref_id": "BIBREF23" }, { "start": 2503, "end": 2527, "text": "(Atoum and Nouman, 2019)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "It contains approximately 2.5K posts on general topics from social media sites, classified manually as positive, negative or neutral. The corpus is still under development. Recently, a 40K-tweet dataset was presented in (Mohammed and Kora, 2019). The authors extracted tweets written in Arabic. 
The tweets were then preprocessed and cleaned very carefully by two experts, who corrected every misspelled word and removed all repeated characters, in addition to the usual cleaning steps such as normalisation. The total size of the dataset is 40,000 tweets, split equally between positive and negative. The corpus is considered a reliable resource, but by manually cleaning all the data it becomes a heavily hand-crafted corpus whose clean text differs from real tweets: the goal of cleaning should be to normalise the text and remove spelling mistakes while keeping the style of the author. This corpus has been normalised too aggressively, and important information has therefore been lost. Although in most Arabic tweet corpus creation procedures the authors used emoticons to extract as many sentiment-bearing tweets as possible (Al-Twairesh et al., 2017; Refaee and Rieser, 2014), none of them used emojis or emoticons as sentiment labels. An emoticon is built from keyboard characters that, when put together in a certain way, represent a facial expression, like :) ;) :( and so on, while an emoji is an actual image 3 . The Stanford Twitter Sentiment (STS) corpus is one of the most well-known datasets for English Twitter sentiment analysis (Go et al., 2009). It provides training and testing sets. The tweets were collected on the condition that they contain at least one emoticon, and were then automatically classified as positive or negative according to those emoticons. The process resulted in a training set of 1.6 million automatically annotated tweets and a test set of 359 manually annotated tweets used as a gold standard. The dataset has been used extensively for different tasks related to sentiment analysis and subjectivity classification (Bravo-Marquez et al., 2013; Saif et al., 2012; Bakliwal et al., 2012; Speriosu et al., 2011). 
Refaee and Rieser (2014) presented Arabic subsets of tweets collected using emoticons, hashtags and keywords, and applied distant supervision to the emoticon subset. In their evaluation, they obtained accuracies of 95% and 51% for subjectivity analysis and sentiment classification, respectively, and concluded that emoticons can be used effectively for subjectivity detection but not for the polarity classification task. As is clear from the discussion above, each of these corpora is lacking in some respect: the size of the corpus (ASTD), the number of dialects represented (AraSenti) or the annotation procedure (LABR). We aim for an Arabic sentiment analysis corpus that covers Arabic social media text, handles multiple dialects, contains a reasonable number of instances for conducting experiments, and is annotated as accurately as possible. In this paper, similarly to STS (Go et al., 2009), we construct a dataset that uses emojis for extracting and classifying tweets. 
Additionally, we manually annotated 20% of this data, which can be used as a gold standard for any tweet sentiment analysis task and serves as the test set for our corpus.", "cite_spans": [ { "start": 215, "end": 240, "text": "(Mohammed and Kora, 2019)", "ref_id": "BIBREF22" }, { "start": 1156, "end": 1182, "text": "(Al-Twairesh et al., 2017;", "ref_id": "BIBREF6" }, { "start": 1183, "end": 1207, "text": "Refaee and Rieser, 2014)", "ref_id": "BIBREF26" }, { "start": 1582, "end": 1599, "text": "(Go et al., 2009)", "ref_id": "BIBREF18" }, { "start": 2100, "end": 2128, "text": "(Bravo-Marquez et al., 2013;", "ref_id": "BIBREF13" }, { "start": 2129, "end": 2147, "text": "Saif et al., 2012;", "ref_id": "BIBREF29" }, { "start": 2148, "end": 2170, "text": "Bakliwal et al., 2012;", "ref_id": "BIBREF12" }, { "start": 2171, "end": 2193, "text": "Speriosu et al., 2011)", "ref_id": "BIBREF30" }, { "start": 2196, "end": 2220, "text": "Refaee and Rieser (2014)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Natural language processing must be adapted to the type of text being processed (formal, scientific, colloquial); moreover, humans differ in how they write within any given type of text. This variety in writing style has increased with the advent of social media, where people increasingly post, reply and tweet in their own writing style and everyday conversational language. In addition to the specific idiosyncrasies of Arabic in terms of processing, Twitter has unique features that give tweets characteristics different from other social media text (Alwakid et al., 2017; Giachanou and Crestani, 2016). Detecting sentiment in social media text in general, and Twitter in particular, is a non-trivial task. 
There are several challenges:", "cite_spans": [ { "start": 565, "end": 587, "text": "(Alwakid et al., 2017;", "ref_id": "BIBREF8" }, { "start": 588, "end": 617, "text": "Giachanou and Crestani, 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Challenges of processing text from social media", "sec_num": "3." }, { "text": "\u2022 Short text length is a defining characteristic of tweets, which are limited to 280 characters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges of processing text from social media", "sec_num": "3." }, { "text": "\u2022 Due to the constraint on tweet length (280 characters), users tend to employ abbreviations to make room for other words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges of processing text from social media", "sec_num": "3." }, { "text": "\u2022 Tweets, like other social media text, are an example of user-generated content and contain unstructured language, orthographic mistakes, slang, many ironic and sarcastic sentences, abbreviations and many idiomatic expressions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges of processing text from social media", "sec_num": "3." }, { "text": "\u2022 Analysing Arabic tweets in particular is challenging due to the use of Arabic dialects, which (owing to the lack of a standard orthography) results in many spelling inconsistencies. Moreover, the lack of capitalisation and diacritics, as well as the use of connected words, increases the complexity of processing Arabic tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges of processing text from social media", "sec_num": "3." }, { "text": "\u2022 The extensive use of misspellings in Arabic results in data sparsity, which impacts the overall performance of SA systems. Saif et al. 
(2012) propose a semantic smoothing model that extracts semantically hidden concepts from tweets and then incorporates them into supervised classifier training through interpolation, in order to reduce sparseness in English tweets.", "cite_spans": [ { "start": 134, "end": 152, "text": "Saif et al. (2012)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Challenges of processing text from social media", "sec_num": "3." }, { "text": "\u2022 Many Arabic tweets are verses from the Holy Quran or prayers that refer to different situations with different meanings. For example, a tweet meaning in English 'Mum, I miss you a lot. I ask God to have mercy on you and to bring us together in heaven' ostensibly carries a positive meaning of empathy and paradise, yet it expresses negative feelings of longing and loss due to death.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Challenges of processing text from social media", "sec_num": "3." }, { "text": "To create the sentiment analysis corpus, we first build a sentiment emoji lexicon. The lexicon contains both positive and negative emojis expressing the feelings corresponding to the different sentiment categories. We collect the emojis, along with their associated sentiment, from the \"Emoji Sentiment Ranking\" lexicon (Kralj Novak et al., 2015) , which is available at http://kt.ijs.si/data/Emoji_sentiment_ranking/, and from Emojipedia 4 . This lexicon, composed of 91 negative and 306 positive emojis, is then employed as the seed for the Twitter retrieval procedure. Instead of collecting tweets by hashtags or query terms, we exploit the emojis and their assigned sentiment, and restrict the tweet language to Arabic. We extracted 59K tweets using the Twitter API in April 2019. The corpus contains multiple dialects from all over the Arab world, as it is not geographically constrained. 
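The lexicon-driven weak labelling can be sketched as follows. This is an illustrative sketch, not the authors' code: the emoji entries below are a tiny hypothetical stand-in for the full 306-positive / 91-negative lexicon, and the `label_tweet` helper is our own naming.

```python
# Illustrative sketch of emoji-based distant labelling. The entries below
# stand in for the full lexicon of 306 positive and 91 negative emojis.
EMOJI_LEXICON = {
    "\U0001F60D": "positive",  # smiling face with heart-eyes
    "\U0001F602": "positive",  # face with tears of joy
    "\U0001F622": "negative",  # crying face
    "\U0001F621": "negative",  # pouting face
}

def label_tweet(text, lexicon=EMOJI_LEXICON):
    """Return a weak sentiment label for a tweet based on its emojis.

    Yields 'positive' or 'negative' when all matched emojis agree, and
    None when no emoji matches or the matched polarities conflict.
    """
    polarities = {pol for emoji, pol in lexicon.items() if emoji in text}
    return polarities.pop() if len(polarities) == 1 else None
```

A tweet fetched by a positive seed emoji thus inherits the label "positive"; tweets whose emojis disagree would be left unlabelled under this sketch.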
To automatically annotate the tweets as either positive or negative, we use the emojis as noisy (weak) labels. If a tweet is fetched by a positive emoji from the lexicon, it is labelled as positive, and tweets fetched by the negative part of the lexicon are labelled as negative.", "cite_spans": [ { "start": 333, "end": 359, "text": "(Kralj Novak et al., 2015)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Arabic Tweets Sentiment Analysis Dataset (ATSAD)", "sec_num": "4." }, { "text": "More specifically, we perform the following cleaning actions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Arabic Tweets Sentiment Analysis Dataset (ATSAD)", "sec_num": "4." }, { "text": "1. Remove all metadata generated by the Twitter API, such as tweet id, username, time, location, RT ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Arabic Tweets Sentiment Analysis Dataset (ATSAD)", "sec_num": "4." }, { "text": "The process of building a resource is not limited to data collection: the data must be checked and verified in order for the resource to be trustworthy and usable. In this section, we evaluate the tweets corpus using two well-known methodologies: intrinsic and extrinsic evaluation. In intrinsic evaluation, the corpus is evaluated directly in terms of its accuracy and quality. We check whether the rule-based annotation (simply, emoji annotation) can be used to build a reliable corpus and serve the desired functionality effectively. In extrinsic evaluation, on the other hand, the dataset is assessed with respect to its impact on an external task, which in our case is the sentiment analysis model (Resnik and Lin, 2010) .", "cite_spans": [ { "start": 724, "end": 746, "text": "(Resnik and Lin, 2010)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Evaluation", "sec_num": "5." 
}, { "text": "To check the quality of the corpus, we have asked two annotators, one an NLP expert, the second an educated native Arabic speaker, to annotate subsets of the corpus. We start with a random sample containing 180 instances (1% of the data) for both positive and negative classes. When the annotation was completed, the two annotators agreed on the 90% of the sample. In case of disagreement, we choose the expert annotator's choice as the class label. The annotation process is cumulative, in the sense that we pick random samples every time from the corpus and ask the annotators to annotate. For each sample we calculate the number of mismatched labels between the emoji-based annotation and the human annotation, and we also compute the accuracy of the emojibased annotation by taking the number of right classified instances divided by the total number of the sample. Table 2 shows the number of errors (mismatches) and accuracy for annotation samples in the range from 1% to 10% of the corpus. Figure 1 plots the accuracy results. It is clear that after manually annotating 10% of the whole corpus, the percentage of matches tweets between the human and the emoji-based annotation is 77.2%. Obtaining 77.2% is not good enough to use it for a task to predict the real sentiment of the tweets even though it is less time-consuming compared to manual annotation. Therefore, later we are going to present a combination method of self training and distant supervision to improve the quality of the dataset. Moreover, we check the quality of the corpus with pretrained sentiment analysis models that have been built and trained on existing datasets. The following datasets are used in our experiments and shown in Table 3: \u2022 40k dataset (Mohammed and Kora, 2019) : as mentioned in the related work section, this is a tweets dataset containing 40,000 instances. 
It is manually annotated into positive and negative, and the tweets are subsequently manually cleaned.", "cite_spans": [ { "start": 1734, "end": 1759, "text": "(Mohammed and Kora, 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 870, "end": 877, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 997, "end": 1005, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1711, "end": 1719, "text": "Table 3:", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Corpus Evaluation", "sec_num": "5." }, { "text": "\u2022 LABR (Aly and Atiya, 2013) : a large dataset for Arabic sentiment analysis. The data are extracted from a book review website and contain over 63K book reviews written in MSA with some dialectal phrases. Given that our corpus concerns two-way classification, we only use the binary balanced subsets of LABR. LABR can be considered a human-annotated corpus, where users rate books using a star system (1 to 5).", "cite_spans": [ { "start": 7, "end": 28, "text": "(Aly and Atiya, 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Evaluation", "sec_num": "5." }, { "text": "Ratings of 4 and 5 stars are considered positive, ratings of 1 and 2 stars negative, and 3-star ratings are taken as neutral. In the binary classification case, 3-star ratings are ignored, keeping only the positive and negative labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Evaluation", "sec_num": "5." }, { "text": "\u2022 ASTD (Nabil et al., 2015) : an Arabic SA corpus collected from Twitter and focusing on Egyptian Arabic. It consists of approximately 10K tweets classified as objective, subjective positive, subjective negative, and subjective mixed. We use only the positive and negative subset.", "cite_spans": [ { "start": 7, "end": 27, "text": "(Nabil et al., 2015)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus Evaluation", "sec_num": "5." 
}, { "text": "\u2022 Shami-Senti (Qwaider et al., 2019 We build a model on each corpus and apply the resulting model to our Twitter corpus. The model uses a combination of (1-3) word grams and a LinearSVC classifier. Table 4 shows the accuracy of the models built (trained and tested) on the original datasets, while the ASTAD column shows the accuracy of the trained model when we use it to predict the class on our Twitter dataset. It is clear that none of the models works for this dataset and the accuracy does not exceed 60%. This is an expected result, given that the data are from a very different domain, i.e. book reviews. Even though both ASTD and the ATSAD share the same domain, the ASTD only contains Egyptian dialects. In the case of the Shami corpus, it only contains Levantine dialects with a limited number of examples (2k). The 40k tweets model and ATSAD also share the same domain (tweets) but the manual hard prepossessing and cleaning of the data make it hard to predict real tweets as people post it, also the 40k corpus only has Egyptian dialect.", "cite_spans": [ { "start": 14, "end": 35, "text": "(Qwaider et al., 2019", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 198, "end": 205, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Corpus Evaluation", "sec_num": "5." }, { "text": "Same corpus ATSAD 40k tweets 79% 60% LABR 82% 54% ASTD 81% 59% SHAMI-SENTI 84% 59% Table 4 : Accuracy of models trained on different SA corpora; the same corpus column indicates the accuracy of the model when the train dataset and the test dataset are both from the same corpus, the last column for the accuracy when we test the models on the ATSAD Summing up, it is clear from the previous discussion that the ATSAD is a challenge for the models trained on the available datasets that are standardised and regularised. Therefore we have to create an ML model that would be successful on this ATSAD. 
To achieve good model accuracy, the dataset must be improved in terms of data quality and annotation quality.", "cite_spans": [], "ref_spans": [ { "start": 83, "end": 90, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Corpus Evaluation", "sec_num": "5." }, { "text": "Creating a good resource requires collecting a large amount of data that is preprocessed and annotated. The annotation is usually done by hiring annotators and specifying annotation rules they have to follow to produce reasonable annotation agreement. This process is time-consuming and costly. There is another approach to building a large enough dataset more quickly, called distant supervision or weak supervision (Yao et al., 2010) . Distant supervision involves heuristically matching the contents of a database to the corresponding text (Hoffmann et al., 2011) . In our case, we use the emojis in the tweets as weak labels with which we can annotate the 36K tweets automatically. Although this does not always produce a high-quality dataset, it works for some tasks. We annotate the 36K tweets by distant supervision and then extract 4K tweets (10% of the total dataset), which we ask the two annotators to label manually. Computing the agreement between the human annotation and the emoji annotation, we obtain 77.2%. To use the human-annotated dataset as a gold standard, we extract another 4K tweets and manually annotate them as well, bringing the final manually annotated dataset to 8K tweets, of which 3,705 are classified as positive, 3,911 as negative and 384 as mixed. We exclude the mixed class from our experiments. We build a baseline with a TF-IDF word unigram model and a LinearSVC classifier. Moreover, following previous work, we build a more complex model combining word n-grams (1-5) and character n-grams (2-5), with and without word-boundary consideration (Qwaider et al., 2019) . 
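The complex model could be assembled as sketched below, assuming scikit-learn analyzers: 'char_wb' produces character n-grams that respect word boundaries, while 'char' ignores them. This is an illustrative reconstruction, not the authors' published code:

```python
# Sketch of the complex model: word 1-5 grams combined with character
# 2-5 grams, both respecting word boundaries ('char_wb') and ignoring
# them ('char'), all feeding a single LinearSVC classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.svm import LinearSVC

features = FeatureUnion([
    ("word_1_5", TfidfVectorizer(analyzer="word", ngram_range=(1, 5))),
    ("char_wb_2_5", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))),
    ("char_2_5", TfidfVectorizer(analyzer="char", ngram_range=(2, 5))),
])
complex_model = make_pipeline(features, LinearSVC())
```

Character n-grams are particularly useful here because dialectal Arabic spelling varies; overlapping character sequences still match across spelling variants where word unigrams would not.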
The models are built for sentiment analysis, and the problem is treated as two-way classification, so every tweet is classified as either positive or negative. Table 5 shows the number of tweets per class for the human-annotated dataset and for the remaining tweets in the emoji dataset, which were weakly annotated by distant supervision. We apply both the baseline and the complex model to the manually annotated dataset and obtain accuracies of 71% and 79%, respectively. We refer to this experiment as the Manual experiment. To check the quality of the emoji-based dataset again, we applied the models trained on the human labels to the 29K-tweet emoji dataset to predict the labels. The resulting accuracies are 63% and 76% for the baseline and the complex model, respectively (the Mixed experiment). The mixed experiment is to some extent similar to our earlier check of the agreement between the manual annotation and the emoji annotation, which gave an accuracy of 76% on a 4K subset.", "cite_spans": [ { "start": 436, "end": 454, "text": "(Yao et al., 2010)", "ref_id": "BIBREF31" }, { "start": 562, "end": 585, "text": "(Hoffmann et al., 2011)", "ref_id": "BIBREF19" }, { "start": 1642, "end": 1664, "text": "(Qwaider et al., 2019)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 1829, "end": 1836, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Self training on Distant supervision Corpus", "sec_num": "6." }, { "text": "To improve the quality of the automatic annotation, and therefore of the proposed tweets corpus, we exploit the manually annotated dataset to enhance the entire dataset. A self-training approach is employed on the data to improve the classification and increase the accuracy of the annotation. Self-training is a commonly used method for semi-supervised learning (Yarowsky, 1995; Abney, 2002) . 
The idea of self-training is to train a classifier on a small amount of labelled data and to incrementally retrain it by adding the most confidently labelled of the previously unlabelled instances as new data. This process continues until most of the unlabelled data becomes labelled (Gao et al., 2014). We can implement a self-training technique with little modification of the existing configuration: our dataset is not completely unlabelled but carries weak emoji-based annotations. Building on the Mixed experiment, rather than extracting the instances predicted with the highest confidence, we extract the instances where the model's predicted label matches the emoji label (Figure 2 illustrates this self-training (double-check) approach applied to ATSAD). This is the case for 22,542 out of the 29,252 tweets in the dataset. We add these tweets to the training set, which consists of the human-annotated dataset (6,092 tweets). Thus, to retrain the classifier we have a total of 22,542 + 6,092 = 28,634 tweets. We call this experiment double check, since it combines self-training with distant supervision. The 28K tweets now form a strongly supervised dataset, in which the small human-annotated dataset and the distant supervision from emojis together help to annotate more data. We rebuild both the baseline and the complex model, retrain them on the dataset produced by the double-check experiment (28K tweets), and then apply them to the test set drawn from the human annotation dataset (1,524 tweets). We use the same test set across all the experiments in order to allow for comparison. The accuracy of the baseline and the complex model increases to 77% and 86%, respectively.
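The double-check filtering step can be sketched as follows; `model`, `weak_texts` and `emoji_labels` are illustrative names, and the classifier is assumed to follow scikit-learn's `predict` convention:

```python
def double_check(model, weak_texts, emoji_labels):
    """Keep only the weakly labelled tweets whose emoji-based label
    agrees with the label predicted by the model trained on the
    manually annotated data."""
    predictions = model.predict(weak_texts)
    return [(text, label)
            for text, pred, label in zip(weak_texts, predictions, emoji_labels)
            if pred == label]

def retrain_set(gold_pairs, agreed_pairs):
    """Combine the gold-standard pairs with the agreed weak pairs to
    form the new, larger training set."""
    return gold_pairs + agreed_pairs
```

In the paper's setting, the agreed pairs (22,542 tweets) are merged with the 6,092 gold tweets and both models are refit on the combined set.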
Figure 2 shows a diagram of the self-training approach.", "cite_spans": [ { "start": 380, "end": 396, "text": "(Yarowsky, 1995;", "ref_id": "BIBREF32" }, { "start": 397, "end": 409, "text": "Abney, 2002)", "ref_id": "BIBREF1" }, { "start": 712, "end": 730, "text": "(Gao et al., 2014)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 1051, "end": 1059, "text": "Figure 2", "ref_id": null }, { "start": 2097, "end": 2105, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Self training on Distant supervision Corpus", "sec_num": "6." }, { "text": "To evaluate our self-training experiment and our method of extracting only those instances where the model prediction matches the emojis annotation, we conduct a small self-training experiment, called Non-check, in which we:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Self training on Distant supervision Corpus", "sec_num": "6." }, { "text": "1. Use the model from the Mixed experiment to predict labels for the automatically labelled dataset (29K tweets).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Self training on Distant supervision Corpus", "sec_num": "6." }, { "text": "2. Retrain the model on the human-annotated training dataset together with the dataset labelled by the previous model. Thus, this retraining dataset consists of 6,092 + 29,252 = 35,344 tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Self training on Distant supervision Corpus", "sec_num": "6." }, { "text": "3. Use the model to predict the sentiment of the manually annotated test set (1,524 tweets).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Self training on Distant supervision Corpus", "sec_num": "6." }, { "text": "4. The resulting accuracy is 70% for the baseline and 81% for the complex model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Self training on Distant supervision Corpus", "sec_num": "6."
}, { "text": "Consequently, it is clear that (i) using the emojis as a noisy label, (ii) matching with the human annotation and (iii) apply the self training technique to annotate the dataset leads to an improvement of the data. Table 6 shows the performance of the models on different datasets. These are represented as plots in Figure 3 . Table 6 : The performance of the baseline and complex models on different datasets. When we were done with the experiments, we extracted all the emojis and examined the emoji frequencies per category. We found some emojis are shared between the positive and the negative class, such as the smiley face with tears. We also discover that people used the black smiley face to indicate the negative feeling more often than the positive. These emojis are considered tricky emojis and they decrease the quality of the annotation. We modified our conditions by removing all the misleading emojis to collect more accurate data. Up to now we have collected over 200k tweets. Table 7 views the number of occurrence for the most 10 frequent emojis per sentiment category. Table 7 : Number of occurrence for the most 10 frequent emojis per category, the last row show the total number of the whole emojis in the dataset per category", "cite_spans": [], "ref_spans": [ { "start": 215, "end": 222, "text": "Table 6", "ref_id": null }, { "start": 316, "end": 324, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 327, "end": 334, "text": "Table 6", "ref_id": null }, { "start": 993, "end": 1000, "text": "Table 7", "ref_id": null }, { "start": 1088, "end": 1095, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Self training on Distant supervision Corpus", "sec_num": "6." 
}, { "text": "Based on our emojis analysis and the subsequent modification of the data collection and annotation conditions, we are planning to further increase the size of the dataset and use it for different tasks like building custom sentiment word embeddings and to fine-tune deep learning networks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "7." }, { "text": "To extend the limited Dialectal Arabic resources, we collected an Arabic Tweets Sentiment Analysis Dataset (AT-SAD). The corpus has been collected from Twitter during April 2019 and employs emojis as seeds for extraction of candidate instances. After the pre-processing, we apply distant supervision using emojis as weak labels to annotate the entire dataset. In addition, we commissioned two annotators to manually annotate a subset of 8k tweets. We evaluate the corpus by comparing the emoji-based annotation with the human annotation and we get an observed agreement of 77.2%. We built a sentiment analysis machine learning model with the unigram features as a baseline and another complex model that utilises word grams and character grams. We exploit the human annotation dataset to help us improve the annotation of the automatically labelled dataset by self-training approaches. Over several experiments we achieve an accuracy of 86%. Using the distant supervision approaches for automatically data annotation process can saves us a lot of effort, time and money. Distant supervision is a very valuable method to annotate large number of instances automatically, in our case based on emojis to denote the category. The self training approach can be used together with a small number of manually annotated instances to improve the quality of the automatically labelled dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8." 
}, { "text": "https://github.com/motazsaad/arabic-sentiment-analysis", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://grammarist.com/new-words/emoji-vs-emoticon/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://emojipedia.org/people/emojis", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Kathrein Abu Kwaik, Stergios Chatzikyriakidis and Simon Dobnik are supported by grant 2014-39 from the Swedish Research Council, which funds the Centre for Linguistic Theory and Studies in Probability (CLASP) in the Department of Philosophy, Linguistics, and Theory of Science at the University of Gothenburg. Richard Johansson was funded by the Swedish Research Council under grant 2013-4944.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Samar: Subjectivity and sentiment analysis for Arabic social media", "authors": [ { "first": "M", "middle": [], "last": "Abdul-Mageed", "suffix": "" }, { "first": "M", "middle": [], "last": "Diab", "suffix": "" }, { "first": "S", "middle": [], "last": "K\u00fcbler", "suffix": "" } ], "year": 2014, "venue": "Computer Speech & Language", "volume": "28", "issue": "1", "pages": "20--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdul-Mageed, M., Diab, M., and K\u00fcbler, S. (2014). Samar: Subjectivity and sentiment analysis for Arabic social media. 
Computer Speech & Language, 28(1):20- 37.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bootstrapping", "authors": [ { "first": "S", "middle": [], "last": "Abney", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "360--367", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abney, S. (2002). Bootstrapping. In Proceedings of the 40th annual meeting of the Association for Computa- tional Linguistics, pages 360-367.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Deep learning models for sentiment analysis in Arabic", "authors": [ { "first": "Al", "middle": [], "last": "Sallab", "suffix": "" }, { "first": "A", "middle": [], "last": "Hajj", "suffix": "" }, { "first": "H", "middle": [], "last": "Badaro", "suffix": "" }, { "first": "G", "middle": [], "last": "Baly", "suffix": "" }, { "first": "R", "middle": [], "last": "El Hajj", "suffix": "" }, { "first": "W", "middle": [], "last": "Shaban", "suffix": "" }, { "first": "K", "middle": [ "B" ], "last": "", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the second workshop on Arabic natural language processing", "volume": "", "issue": "", "pages": "9--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Al Sallab, A., Hajj, H., Badaro, G., Baly, R., El Hajj, W., and Shaban, K. B. (2015). Deep learning models for sentiment analysis in Arabic. In Proceedings of the sec- ond workshop on Arabic natural language processing, pages 9-17.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Multi-way sentiment classification of Arabic reviews", "authors": [], "year": 2015, "venue": "6th International Conference on Information and Communication Systems (ICICS)", "volume": "", "issue": "", "pages": "206--211", "other_ids": {}, "num": null, "urls": [], "raw_text": "Multi-way sentiment classification of Arabic reviews. 
In 2015 6th International Conference on Information and Communication Systems (ICICS), pages 206-211. IEEE.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "AraSenTi: Large-Scale Twitter-Specific Arabic Sentiment Lexicons", "authors": [ { "first": "N", "middle": [], "last": "Al-Twairesh", "suffix": "" }, { "first": "H", "middle": [], "last": "Al-Khalifa", "suffix": "" }, { "first": "Al-Salman", "middle": [], "last": "", "suffix": "" }, { "first": "A", "middle": [], "last": "", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "697--705", "other_ids": {}, "num": null, "urls": [], "raw_text": "Al-Twairesh, N., Al-Khalifa, H., and Al-Salman, A. (2016). AraSenTi: Large-Scale Twitter-Specific Arabic Sentiment Lexicons. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 697-705, Berlin, Germany, August. Association for Computational Lin- guistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Arasenti-tweet: A corpus for Arabic sentiment analysis of saudi tweets", "authors": [ { "first": "N", "middle": [], "last": "Al-Twairesh", "suffix": "" }, { "first": "H", "middle": [], "last": "Al-Khalifa", "suffix": "" }, { "first": "A", "middle": [], "last": "Al-Salman", "suffix": "" }, { "first": "Al-Ohali", "middle": [], "last": "", "suffix": "" }, { "first": "Y", "middle": [], "last": "", "suffix": "" } ], "year": 2017, "venue": "Procedia Computer Science", "volume": "117", "issue": "", "pages": "63--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Al-Twairesh, N., Al-Khalifa, H., Al-Salman, A., and Al- Ohali, Y. (2017). Arasenti-tweet: A corpus for Arabic sentiment analysis of saudi tweets. 
Procedia Computer Science, 117:63-72.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A combined CNN and LSTM model for Arabic sentiment analysis", "authors": [ { "first": "A", "middle": [ "M" ], "last": "Alayba", "suffix": "" }, { "first": "V", "middle": [], "last": "Palade", "suffix": "" }, { "first": "M", "middle": [], "last": "England", "suffix": "" }, { "first": "R", "middle": [], "last": "Iqbal", "suffix": "" } ], "year": 2018, "venue": "International Cross-Domain Conference for Machine Learning and Knowledge Extraction", "volume": "", "issue": "", "pages": "179--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alayba, A. M., Palade, V., England, M., and Iqbal, R. (2018). A combined CNN and LSTM model for Ara- bic sentiment analysis. In International Cross-Domain Conference for Machine Learning and Knowledge Ex- traction, pages 179-191. Springer.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Challenges in sentiment analysis for Arabic social networks", "authors": [ { "first": "G", "middle": [], "last": "Alwakid", "suffix": "" }, { "first": "T", "middle": [], "last": "Osman", "suffix": "" }, { "first": "T", "middle": [], "last": "Hughes-Roberts", "suffix": "" } ], "year": 2017, "venue": "Procedia Computer Science", "volume": "117", "issue": "", "pages": "89--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alwakid, G., Osman, T., and Hughes-Roberts, T. (2017). Challenges in sentiment analysis for Arabic social net- works. 
Procedia Computer Science, 117:89-100.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Labr: A large scale Arabic book reviews dataset", "authors": [ { "first": "M", "middle": [], "last": "Aly", "suffix": "" }, { "first": "A", "middle": [], "last": "Atiya", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "494--498", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aly, M. and Atiya, A. (2013). Labr: A large scale Arabic book reviews dataset. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguis- tics (Volume 2: Short Papers), volume 2, pages 494-498.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Sentiment Analysis of Arabic Jordanian Dialect Tweets. (IJACSA) International", "authors": [ { "first": "J", "middle": [ "O" ], "last": "Atoum", "suffix": "" }, { "first": "M", "middle": [], "last": "Nouman", "suffix": "" } ], "year": 2019, "venue": "Journal of Advanced Computer Science and Applications", "volume": "10", "issue": "", "pages": "256--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atoum, J. O. and Nouman, M. (2019). Sentiment Anal- ysis of Arabic Jordanian Dialect Tweets. (IJACSA) In- ternational Journal of Advanced Computer Science and Applications, 10:256-262.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Aara'-a system for mining the polarity of Saudi public opinion through e-newspaper comments", "authors": [ { "first": "A", "middle": [ "M" ], "last": "Azmi", "suffix": "" }, { "first": "S", "middle": [ "M" ], "last": "Alzanin", "suffix": "" } ], "year": 2014, "venue": "Journal of Information Science", "volume": "40", "issue": "3", "pages": "398--410", "other_ids": {}, "num": null, "urls": [], "raw_text": "Azmi, A. M. and Alzanin, S. M. (2014). Aara'-a system for mining the polarity of Saudi public opinion through e-newspaper comments. 
Journal of Information Science, 40(3):398-410.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Mining sentiments from tweets", "authors": [ { "first": "A", "middle": [], "last": "Bakliwal", "suffix": "" }, { "first": "P", "middle": [], "last": "Arora", "suffix": "" }, { "first": "S", "middle": [], "last": "Madhappan", "suffix": "" }, { "first": "N", "middle": [], "last": "Kapre", "suffix": "" }, { "first": "M", "middle": [], "last": "Singh", "suffix": "" }, { "first": "V", "middle": [], "last": "Varma", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 3rd Workshop in Computational Approaches to Subjectivity and Sentiment Analysis", "volume": "", "issue": "", "pages": "11--18", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bakliwal, A., Arora, P., Madhappan, S., Kapre, N., Singh, M., and Varma, V. (2012). Mining sentiments from tweets. In Proceedings of the 3rd Workshop in Compu- tational Approaches to Subjectivity and Sentiment Anal- ysis, pages 11-18.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Combining strengths, emotions and polarities for boosting twitter sentiment analysis", "authors": [ { "first": "F", "middle": [], "last": "Bravo-Marquez", "suffix": "" }, { "first": "M", "middle": [], "last": "Mendoza", "suffix": "" }, { "first": "B", "middle": [], "last": "Poblete", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Second International Workshop on Issues of Sentiment Discovery and Opinion Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bravo-Marquez, F., Mendoza, M., and Poblete, B. (2013). Combining strengths, emotions and polarities for boost- ing twitter sentiment analysis. In Proceedings of the Sec- ond International Workshop on Issues of Sentiment Dis- covery and Opinion Mining, page 2. 
ACM.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Word embeddings and convolutional neural network for Arabic sentiment classification", "authors": [ { "first": "A", "middle": [], "last": "Dahou", "suffix": "" }, { "first": "S", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "J", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "M", "middle": [ "H" ], "last": "Haddoud", "suffix": "" }, { "first": "P", "middle": [], "last": "Duan", "suffix": "" } ], "year": 2016, "venue": "Proceedings of coling 2016, the 26th international conference on computational linguistics: Technical papers", "volume": "", "issue": "", "pages": "2418--2427", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dahou, A., Xiong, S., Zhou, J., Haddoud, M. H., and Duan, P. (2016). Word embeddings and convolutional neural network for Arabic sentiment classification. In Proceed- ings of coling 2016, the 26th international conference on computational linguistics: Technical papers, pages 2418-2427.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Combining lexical features and a supervised learning approach for Arabic sentiment analysis", "authors": [ { "first": "S", "middle": [ "R" ], "last": "El-Beltagy", "suffix": "" }, { "first": "T", "middle": [], "last": "Khalil", "suffix": "" }, { "first": "A", "middle": [], "last": "Halaby", "suffix": "" }, { "first": "Hammad", "middle": [], "last": "", "suffix": "" }, { "first": "M", "middle": [], "last": "", "suffix": "" } ], "year": 2016, "venue": "International Conference on Intelligent Text Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "307--319", "other_ids": {}, "num": null, "urls": [], "raw_text": "El-Beltagy, S. R., Khalil, T., Halaby, A., and Hammad, M. (2016). Combining lexical features and a super- vised learning approach for Arabic sentiment analy- sis. In International Conference on Intelligent Text Pro- cessing and Computational Linguistics, pages 307-319. 
Springer.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Hotel Arabic-reviews dataset construction for sentiment analysis applications", "authors": [ { "first": "A", "middle": [], "last": "Elnagar", "suffix": "" }, { "first": "Y", "middle": [ "S" ], "last": "Khalifa", "suffix": "" }, { "first": "A", "middle": [], "last": "Einea", "suffix": "" } ], "year": 2018, "venue": "Intelligent Natural Language Processing: Trends and Applications", "volume": "", "issue": "", "pages": "35--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elnagar, A., Khalifa, Y. S., and Einea, A. (2018). Ho- tel Arabic-reviews dataset construction for sentiment analysis applications. In Intelligent Natural Language Processing: Trends and Applications, pages 35-52. Springer.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Semi-supervised sentiment classification with self-training on feature subspaces", "authors": [ { "first": "W", "middle": [], "last": "Gao", "suffix": "" }, { "first": "S", "middle": [], "last": "Li", "suffix": "" }, { "first": "Y", "middle": [], "last": "Xue", "suffix": "" }, { "first": "M", "middle": [], "last": "Wang", "suffix": "" }, { "first": "G", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2014, "venue": "Workshop on Chinese Lexical Semantics", "volume": "", "issue": "", "pages": "231--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gao, W., Li, S., Xue, Y., Wang, M., and Zhou, G. (2014). Semi-supervised sentiment classification with self-training on feature subspaces. In Workshop on Chi- nese Lexical Semantics, pages 231-239. 
Springer.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Like It or Not: A Survey of Twitter Sentiment Analysis Methods", "authors": [ { "first": "A", "middle": [], "last": "Giachanou", "suffix": "" }, { "first": "F", "middle": [], "last": "Crestani", "suffix": "" }, { "first": "A", "middle": [], "last": "Go", "suffix": "" }, { "first": "R", "middle": [], "last": "Bhayani", "suffix": "" }, { "first": "L", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2009, "venue": "CS224N Project Report", "volume": "49", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Giachanou, A. and Crestani, F. (2016). Like It or Not: A Survey of Twitter Sentiment Analysis Methods. ACM Comput. Surv., 49(2):28:1-28:41, June. Go, A., Bhayani, R., and Huang, L. (2009). Twitter sen- timent classification using distant supervision. CS224N Project Report, Stanford, 1(12):2009.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Knowledge-based weak supervision for information extraction of overlapping relations", "authors": [ { "first": "R", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "C", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "X", "middle": [], "last": "Ling", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "D", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "541--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hoffmann, R., Zhang, C., Ling, X., Zettlemoyer, L., and Weld, D. S. (2011). Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Language Technologies-Volume 1, pages 541-550. 
Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Sentiment of emojis", "authors": [ { "first": "Kralj", "middle": [], "last": "Novak", "suffix": "" }, { "first": "P", "middle": [], "last": "Smailovi\u0107", "suffix": "" }, { "first": "J", "middle": [], "last": "Sluban", "suffix": "" }, { "first": "B", "middle": [], "last": "Mozeti\u010d", "suffix": "" }, { "first": "I", "middle": [], "last": "", "suffix": "" } ], "year": 2015, "venue": "PLoS ONE", "volume": "10", "issue": "12", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kralj Novak, P., Smailovi\u0107, J., Sluban, B., and Mozeti\u010d, I. (2015). Sentiment of emojis. PLoS ONE, 10(12):e0144296.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Sentiment analysis and opinion mining. Synthesis lectures on human language technologies", "authors": [ { "first": "B", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2012, "venue": "", "volume": "5", "issue": "", "pages": "1--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, B. (2012). Sentiment analysis and opinion min- ing. Synthesis lectures on human language technologies, 5(1):1-167.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Deep learning approaches for Arabic sentiment analysis. Social Network Analysis and Mining", "authors": [ { "first": "A", "middle": [], "last": "Mohammed", "suffix": "" }, { "first": "R", "middle": [], "last": "Kora", "suffix": "" } ], "year": 2019, "venue": "", "volume": "9", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammed, A. and Kora, R. (2019). Deep learning ap- proaches for Arabic sentiment analysis. 
Social Network Analysis and Mining, 9(1):52.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "ASTD: Arabic Sentiment Tweets Dataset", "authors": [ { "first": "M", "middle": [], "last": "Nabil", "suffix": "" }, { "first": "M", "middle": [], "last": "Aly", "suffix": "" }, { "first": "A", "middle": [], "last": "Atiya", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2515--2519", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nabil, M., Aly, M., and Atiya, A. (2015). ASTD: Arabic Sentiment Tweets Dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2515-2519.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Enhancing the determination of aspect categories and their polarities in Arabic reviews using lexicon-based approaches", "authors": [ { "first": "I", "middle": [], "last": "Obaidat", "suffix": "" }, { "first": "R", "middle": [], "last": "Mohawesh", "suffix": "" }, { "first": "M", "middle": [], "last": "Al-Ayyoub", "suffix": "" }, { "first": "A.-S", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Y", "middle": [], "last": "Jararweh", "suffix": "" } ], "year": 2015, "venue": "2015 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT)", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Obaidat, I., Mohawesh, R., Al-Ayyoub, M., Mohammad, A.-S., and Jararweh, Y. (2015). Enhancing the determi- nation of aspect categories and their polarities in Ara- bic reviews using lexicon-based approaches. In 2015 IEEE Jordan Conference on Applied Electrical Engi- neering and Computing Technologies (AEECT), pages 1-6. IEEE.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Can Modern Standard Arabic Approaches be used for Arabic Dialects? 
Sentiment Analysis as a Case Study", "authors": [ { "first": "C", "middle": [], "last": "Qwaider", "suffix": "" }, { "first": "S", "middle": [], "last": "Chatzikyriakidis", "suffix": "" }, { "first": "S", "middle": [], "last": "Dobnik", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 3rd Workshop on Arabic Corpus Linguistics", "volume": "", "issue": "", "pages": "40--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qwaider, C., Chatzikyriakidis, S., and Dobnik, S. (2019). Can Modern Standard Arabic Approaches be used for Arabic Dialects? Sentiment Analysis as a Case Study. In Proceedings of the 3rd Workshop on Arabic Corpus Linguistics, pages 40-50.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Evaluating distant supervision for subjectivity and sentiment analysis on Arabic twitter feeds", "authors": [ { "first": "E", "middle": [], "last": "Refaee", "suffix": "" }, { "first": "V", "middle": [], "last": "Rieser", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the EMNLP 2014 workshop on Arabic natural language processing (ANLP)", "volume": "", "issue": "", "pages": "174--179", "other_ids": {}, "num": null, "urls": [], "raw_text": "Refaee, E. and Rieser, V. (2014). Evaluating distant super- vision for subjectivity and sentiment analysis on Arabic twitter feeds. In Proceedings of the EMNLP 2014 work- shop on Arabic natural language processing (ANLP), pages 174-179.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "11. evaluation of NLP systems. The handbook of computational linguistics and natural language processing", "authors": [ { "first": "P", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "J", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Resnik, P. and Lin, J. (2010). 11. evaluation of NLP sys- tems. 
The handbook of computational linguistics and natural language processing, 57.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "OCA: Opinion Corpus for Arabic", "authors": [ { "first": "M", "middle": [], "last": "Rushdi-Saleh", "suffix": "" }, { "first": "M", "middle": [ "T" ], "last": "Mart\u00edn-Valdivia", "suffix": "" }, { "first": "L", "middle": [ "A" ], "last": "Ure\u00f1a-L\u00f3pez", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Perea-Ortega", "suffix": "" } ], "year": 2011, "venue": "Journal of the American Society for Information Science and Technology", "volume": "62", "issue": "10", "pages": "2045--2054", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rushdi-Saleh, M., Mart\u00edn-Valdivia, M. T., Ure\u00f1a-L\u00f3pez, L. A., and Perea-Ortega, J. M. (2011). OCA: Opin- ion Corpus for Arabic. Journal of the American Society for Information Science and Technology, 62(10):2045- 2054.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Alleviating data sparsity for twitter sentiment analysis", "authors": [ { "first": "H", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Y", "middle": [], "last": "He", "suffix": "" }, { "first": "Alani", "middle": [], "last": "", "suffix": "" }, { "first": "H", "middle": [], "last": "", "suffix": "" } ], "year": 2012, "venue": "CEUR Workshop Proceedings (CEUR-WS. org)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif, H., He, Y., and Alani, H. (2012). Alleviating data sparsity for twitter sentiment analysis. CEUR Workshop Proceedings (CEUR-WS. 
org).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Twitter polarity classification with label propagation over lexical links and the follower graph", "authors": [ { "first": "M", "middle": [], "last": "Speriosu", "suffix": "" }, { "first": "N", "middle": [], "last": "Sudan", "suffix": "" }, { "first": "S", "middle": [], "last": "Upadhyay", "suffix": "" }, { "first": "J", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the First workshop on Unsupervised Learning in NLP", "volume": "", "issue": "", "pages": "53--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Speriosu, M., Sudan, N., Upadhyay, S., and Baldridge, J. (2011). Twitter polarity classification with label propa- gation over lexical links and the follower graph. In Pro- ceedings of the First workshop on Unsupervised Learn- ing in NLP, pages 53-63. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Collective cross-document relation extraction without labelled data", "authors": [ { "first": "L", "middle": [], "last": "Yao", "suffix": "" }, { "first": "S", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1013--1023", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yao, L., Riedel, S., and McCallum, A. (2010). Collec- tive cross-document relation extraction without labelled data. In Proceedings of the 2010 Conference on Em- pirical Methods in Natural Language Processing, pages 1013-1023. 
Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Unsupervised word sense disambiguation rivaling supervised methods", "authors": [ { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "33rd annual meeting of the association for computational linguistics", "volume": "", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yarowsky, D. (1995). Unsupervised word sense disam- biguation rivaling supervised methods. In 33rd annual meeting of the association for computational linguistics, pages 189-196.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Accuracy of dataset comparing to human annotation" }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "plotting the accuracy for all the experiments for both the baseline and complicated models" }, "TABREF0": { "type_str": "table", "text": "Table 1shows the statistics of the corpus before and after the pre-processing phase which gives us 36K tweets.", "content": "
2. Remove all special characters except emojis
3. Remove non-Arabic characters
4. Remove links
5. Remove diacritics from the text
6. Remove duplicated tweets
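The cleaning steps above could be sketched as follows. This is a minimal illustration only: the paper does not specify the exact patterns used, so the Unicode ranges for Arabic letters, diacritics, and emojis below are assumptions, and the `preprocess` function name is hypothetical.

```python
import re

# Assumed Unicode ranges; the authors' actual regexes are not given.
ARABIC = r'\u0600-\u06FF'                      # basic Arabic block
DIACRITICS = re.compile(r'[\u064B-\u0652]')    # tashkeel (diacritic) marks
EMOJI = r'\U0001F300-\U0001FAFF\u2600-\u27BF'  # common emoji/symbol ranges

def preprocess(tweets):
    cleaned, seen = [], set()
    for t in tweets:
        t = re.sub(r'https?://\S+|www\.\S+', '', t)   # step 4: remove links
        t = DIACRITICS.sub('', t)                     # step 5: remove diacritics
        # steps 2-3: keep only Arabic characters, emojis, and whitespace
        t = re.sub(rf'[^{ARABIC}{EMOJI}\s]', '', t)
        t = re.sub(r'\s+', ' ', t).strip()
        if t and t not in seen:                       # step 6: drop duplicates
            seen.add(t)
            cleaned.append(t)
    return cleaned
```

After cleaning, two tweets that differed only in their URLs collapse to the same string and are de-duplicated, which is consistent with the drop from 59K to 36K tweets reported in the table.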
Positive Negative Total Vocab Words
Before 30,607 29,232 59,839 95,538 762,673
After 18,173 18,695 36,868 95,057 418,857
", "num": null, "html": null }, "TABREF1": { "type_str": "table", "text": "Statistics of the Twitter sentiment analysis corpus (ATSAD) before and after the pre-processing", "content": "", "num": null, "html": null }, "TABREF3": { "type_str": "table", "text": "Human annotation accuracy compared to the emojis based annotation. The first two columns show the percentage and number of the sampled tweets, # error shows the number of mismatched samples and the Accuracy column calculates the percentage of the matches between both annotations.", "content": "
", "num": null, "html": null }, "TABREF5": { "type_str": "table", "text": "", "content": "
", "num": null, "html": null }, "TABREF7": { "type_str": "table", "text": "Statistics of the human annotation subset and the emojis distant supervision subset after subtract the human dataset", "content": "
", "num": null, "html": null } } } }