{ "paper_id": "2019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:30:16.565833Z" }, "title": "A Deep Learning Approach for Automatic Detection of Fake News", "authors": [ { "first": "Tanik", "middle": [], "last": "Saikh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology", "location": { "settlement": "Patna" } }, "email": "" }, { "first": "Arkadipta", "middle": [], "last": "De", "suffix": "", "affiliation": {}, "email": "de.arkadipta05@gmail.com" }, { "first": "Asif", "middle": [], "last": "Ekbal", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology", "location": { "settlement": "Patna" } }, "email": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indian Institute of Technology", "location": { "settlement": "Patna" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Fake news detection is a very prominent and essential task in the field of journalism. This challenging problem is seen so far in the field of politics, but it could be even more challenging when it is to be determined in the multi-domain platform. In this paper, we propose two effective models based on deep learning for solving fake news detection problem in online news contents of multiple domains. We evaluate our techniques on the two recently released datasets, namely Fake-News AMT and Celebrity for fake news detection. The proposed systems yield encouraging performance, outperforming the current handcrafted feature engineering based state-of-theart system with a significant margin of 3.08% and 9.3% by the two models, respectively. In order to exploit the datasets, available for the related tasks, we perform cross-domain analysis (i.e. model trained on FakeNews AMT and tested on Celebrity and vice versa) to explore the applicability of our systems across the domains.", "pdf_parse": { "paper_id": "2019", "_pdf_hash": "", "abstract": [ { "text": "Fake news detection is a very prominent and essential task in the field of journalism. This challenging problem is seen so far in the field of politics, but it could be even more challenging when it is to be determined in the multi-domain platform. In this paper, we propose two effective models based on deep learning for solving fake news detection problem in online news contents of multiple domains. We evaluate our techniques on the two recently released datasets, namely Fake-News AMT and Celebrity for fake news detection. The proposed systems yield encouraging performance, outperforming the current handcrafted feature engineering based state-of-theart system with a significant margin of 3.08% and 9.3% by the two models, respectively. In order to exploit the datasets, available for the related tasks, we perform cross-domain analysis (i.e. model trained on FakeNews AMT and tested on Celebrity and vice versa) to explore the applicability of our systems across the domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the emergence of social and news media, data are constantly being created day by day. The data so generated are enormous in amount, and often contains miss-information. Hence it is necessary to check it's truthfulness. Nowadays people mostly rely on social media and many other online news feeds as their only platforms for news consumption (Jeffrey and Elisa, 2016) . A survey from the Consumer News and Business Channel (CNBC) also reveals that more people are rely on social media for news consumption rather than news paper 1 . Therefore, in order to deliver the genuine news to such consumers, checking the truthfulness of such online news content is of utmost priority to news industries. The task is very difficult for a machine as even human being can not understand news article's veracity (easily) after reading the article. Prior works on fake news detection entirely rely on the datasets having satirical news contents sources, namely \"The Onion\" (Rubin et al., 2016) , fact checking website like Politi-Fact (Wang, 2017) , and Snopes (Popat et al., 2016) , and on the contents of the websites which track viral news such as BuzzFeed (Potthast et al., 2018) etc. But these sources have severe drawbacks and multiple challenges too. Satirical news mimic the real news which are having the mixture of irony and absurdity. Most of the works in fake news detection fall in this line and confine in one domain (i.e. politics). The task could be even more challenging and generic if we study this fake news detection problem in multiple domain scenarios. We endeavour to mitigate this particular problem of fake news detection in multiple domains. This task is even more challenging compared to the situation when news is taken only from a particular domain, i.e. uni-domain platform. We make use of the dataset which contained news contents from multiple domains. The problem definition would be as follows: Given a News Topic along with the corresponding News Body Document, the task is to classify whether the given news is legitimate/genuine or Fake. The work described in P\u00e9rez-Rosas et al. (2018) followed this path. They also offered two novel computational resources, namely FakeNews AMT and Celebrity news. These datasets are having triples of topic, document and label (Legit/Fake) from multiple domains (like Business, Education, Technology, Entertainment and Sports etc) including politics. Also, they claimed that these datasets focus on the deceptive properties of online articles from different domains. They provided a baseline model. The model is based on Support Vector Machine (SVM) that exploits the hand-crafted linguistics features. The SVM based model achieved the accuracies of 74% and 76% in the FakeNews AMT and Celebrity news datasets, respectively. We pose this problem as a classification problem. So the proposed predictive models are binary classification systems which aim to classify between fake and the verified content of online news from multiple domains. We solve the problem of multi-domain fake news detection using two variations of deep learning approaches. The first model (denoted as Model 1) is a Bidirectional Gated Recurrent Unit (BiGRU) based deep neural network model, whereas the second model (i.e. Model 2) is Embedding from Language Model (ELMo) based. It is to be noted that the use of deep learning to solve this problem in this particular setting is, in itself, very new. The technique, particularly the word attention mechanism, has not been tried for solving such a problem. Existing prior works for this problem mostly employ the methods that make use of handcrafted features. The proposed systems do not depend on hand crafted feature engineering or a sophisticated NLP pipeline, rather it is an end to end deep neural network architecture. Both the models outperform the state-of-the-art system.", "cite_spans": [ { "start": 344, "end": 369, "text": "(Jeffrey and Elisa, 2016)", "ref_id": "BIBREF10" }, { "start": 962, "end": 982, "text": "(Rubin et al., 2016)", "ref_id": "BIBREF20" }, { "start": 1012, "end": 1036, "text": "Politi-Fact (Wang, 2017)", "ref_id": null }, { "start": 1050, "end": 1070, "text": "(Popat et al., 2016)", "ref_id": "BIBREF18" }, { "start": 1149, "end": 1172, "text": "(Potthast et al., 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A sufficient number of works could be found in the literature in fake news detection. Nowadays the detection of fake news is a hot area of research and gained much more research interest among the researchers. We could detect fake news at two levels, namely the conceptual level and operational level. defined that conceptually there are three types of fake news: viz i. Serious Fabrications ii. Hoaxes and iii. Satire. The work of Conroy et al. (2015) fostered linguistics and fact checking based approaches to distinguish between real and fake news, which could be considered as the work at conceptual level. described that fact-checking approach is a verification of hypothesis made in a news article to judge the truthfulness of a claim. Thorne et al. (2018) introduced a novel dataset for fact-checking and verification where evidence is large Wikipedia corpus. Few notable works which made use of text as evidence can be found in (Ferreira and Vlachos, 2016; Nie et al., 2018) . The Fake News Challenge 2 organized a competition to explore, how artificial intelligence technologies could be fostered to combat fake news. Almost 50 participants were participated and submitted their systems. Hanselowski et al. (2018) performed retrospective analysis of the three best participating systems of the Fake News Challenge. The work of Saikh et al. (2019) detected fake news through stance detection and also correlated this stance classification problem with Textual Entailment (TE). They tackled this problem using statistical machine learning and deep learning approaches separately and with combination of both of these. This system achieved the state of the art result. Another remarkable work in this line is the verification of a human-generated claim given the whole Wikipedia as evidence. The dataset, namely (Fact Extraction and Verification (FEVER)) proposed by Thorne et al. (2018) served this purpose. Few notable works in this line could be found in (Yin and Roth, 2018; Nie et al., 2019) .", "cite_spans": [ { "start": 742, "end": 762, "text": "Thorne et al. (2018)", "ref_id": "BIBREF24" }, { "start": 936, "end": 964, "text": "(Ferreira and Vlachos, 2016;", "ref_id": "BIBREF7" }, { "start": 965, "end": 982, "text": "Nie et al., 2018)", "ref_id": "BIBREF13" }, { "start": 1197, "end": 1222, "text": "Hanselowski et al. (2018)", "ref_id": "BIBREF9" }, { "start": 1336, "end": 1355, "text": "Saikh et al. (2019)", "ref_id": "BIBREF22" }, { "start": 1873, "end": 1893, "text": "Thorne et al. (2018)", "ref_id": "BIBREF24" }, { "start": 1964, "end": 1984, "text": "(Yin and Roth, 2018;", "ref_id": "BIBREF31" }, { "start": 1985, "end": 2002, "text": "Nie et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We propose two deep Learning based models to address the problem of fake information detection in the multi-domain platform. In the following subsections, we will discuss the methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Methods", "sec_num": "3" }, { "text": "This model comprises of multiple layers as shown in the Figure 1 A. Embedding Layer: The embedding of each word is obtained using pre-trained fastText model 3 (Bojanowski et al., 2017) . FastText embedding model is an extended version of Word2Vec (Mikolov et al., 2013) . Word2Vec (predicts embedding of a word based on given context and vice-versa) and Glove (exploits count and word co-occurrence matrix to predict embedding of a word) (Pennington et al., 2014) both treat each word as an atomic entity. The fastText model produces embedding of each word by combining the embedding of each character n-gram of that word. The model works better on rare words and also produces embedding for out-of-vocabulary words, where Word2Vec and Golve both fail. In the multi-domain scenario vocabularies are from different domains and there is a high chance of existing different domain specific vocabularies. This is the reason for choosing the fastText word vector method. B. Encoding Layer: The representation of each word is further given to a bidirectional Gated Recurrent Units (GRUs) (Cho et al., 2014) model. GRU takes less parameter and resources compared to Long Short Term Memory (LSTM), training also is computationally efficient. The working principles of GRU obey the following equations:", "cite_spans": [ { "start": 159, "end": 184, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF1" }, { "start": 247, "end": 269, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF12" }, { "start": 438, "end": 463, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF15" }, { "start": 1082, "end": 1100, "text": "(Cho et al., 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 56, "end": 64, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model 1", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z = \u03b1(x t U z + s t\u22121 W z ) (1) r = \u03b1(x t U r + s t\u22121 W r ) (2) h = tanh(x t U h + r t \u2022 s t\u22121 W r ) (3) r = (1 \u2212 z) \u2022 h + z \u2022 s t\u22121", "eq_num": "(4)" } ], "section": "Model 1", "sec_num": "3.1" }, { "text": "In equation 1, z is the update gate at time step t. This z is the summation of the multiplications of x t with it's own weight U(z) and s t\u22121 (holds the information of previous state) with it's own W(z). A sigmoid \u03b1 is applied on the summation to squeeze the result between 0 and 1. The task of this update gate (z) is to help the model to estimate how much of the previous information (from previous time steps) needs to be passed along to the future. In the equation 2, r is the reset gate, which is responsible for taking the decision of how much past information to forget. The calculation is same as the equation 1. The differences are in the weight and gate usages. The equation 3 performs as follows, i. multiply input x t with a weight U and s t\u22121 with a weight W. ii. Compute the element wise product between reset gate r t and s t\u22121 W. Then a non-linear activation function tanh is applied to the summation of i and ii. Finally, in the equation 4, we compute r which holds the information of the current unit. The computation procedure is as follows: i. compute element-wise multiplication to the update gate z t and s (t\u22121) . ii. calculate element-wise multiplication to (1-z) with h. Take the summation of i and ii. The bidirectional GRUs consists of the forward GRU, which reads the sentence from the first word (w 1 ) to the last word (w L ) and the backward GRU, that reads in reverse direction. We concatenate the representation of each word obtained from both the passes. C. Word Level Attention: We apply the attention model at word level (Bahdanau et al., 2015; Xu et al., 2015) . The objective is to let the model decide which words are importance compared to other words while predicting the target class (fake/legit). We apply this as applied in Yang et al. (2016) . The diagram is shown in the Figure 2. We take the aggregation of those words' representation which are multiplied with attention weight to get sentence representation. We do this process for both the news topic and the corresponding document. This particular technique of the word attention mechanism, has not been tried for solving such a problem.", "cite_spans": [ { "start": 1557, "end": 1580, "text": "(Bahdanau et al., 2015;", "ref_id": "BIBREF0" }, { "start": 1581, "end": 1597, "text": "Xu et al., 2015)", "ref_id": "BIBREF27" }, { "start": 1768, "end": 1786, "text": "Yang et al. (2016)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 1817, "end": 1823, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Model 1", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "U it = tanh(W w h it + b w )", "eq_num": "(5)" } ], "section": "Model 1", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1 it = exp(u T it u w ) t exp(u T it u w )", "eq_num": "(6)" } ], "section": "Model 1", "sec_num": "3.1" }, { "text": "s i = t \u03b1 it h it (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 1", "sec_num": "3.1" }, { "text": "First get the word annotation h it through GRU output and compute u it as a hidden representation of h it in 5. We measure the importance of the word as the similarity of u it with a word level context vector u w and get a normalized importance weight \u03b1 it through a softmax in 6. After that, in 7, we compute the sentence vector s i as a weighted sum of the word annotations based on the weights \u03b1 it . The word context vector u w is randomly initialized and jointly learned during the training process. D. Multi-Layer Perceptron: We concatenate the sentence vector obtained for both the inputs. The obtained vector further fed into fully connected layers. We use 512, 256, 128, 50 and 10 neurons, respectively, for five such layers with ReLU (Glorot et al., 2011) activation in each layer. Between each such layer, we employ 20% dropout (Srivastava et al., 2014) as a measurement of regularization. Finally, the output from the last fully connected layer is fed into a final classification layer with softmax (Duan et al., 2003) activation function having 2 neurons. We use Adam (Kingma and Ba, 2014) optimizer for optimization.", "cite_spans": [ { "start": 1011, "end": 1030, "text": "(Duan et al., 2003)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Model 1", "sec_num": "3.1" }, { "text": "We propose another approach whose embedding layer is based on Embedding for Language Model (ELMo) (Peters et al., 2018) and the MLP Network, which is same as we applied in Model 1. The diagram of this model is shown in the Figure 3 .", "cite_spans": [ { "start": 98, "end": 119, "text": "(Peters et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 223, "end": 232, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Model 2", "sec_num": "3.2" }, { "text": "Embedding Layer: Embedding from Language Model (ELMo) has several advantages over the other word vector methods, and found to be a good performer in many challenging NLP problems. It has key features like i. Contextual i.e. representation of each word is based on entire corpus in which it is used ii. Deep i.e. it combines all layers of a deep pre-trained neural network and iii. Character based i.e. it provides representations which are based on character, thus allowing the network to make use of morphological clues to form robust representation of out-of-vocabulary tokens during training. The ELMO embedding is very efficient in capturing context. The multi-domain datasets are having different vocabularies and contexts, so we make use of such a word vector representation method to capture the context. News topics and corresponding documents are given to Elmo Embedding model. This embedding layer produces the representation for news topic and news content.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 2", "sec_num": "3.2" }, { "text": "After getting the embedding of the topic and the context, we merge them. The merged vector is fed into a five layers MLP (same as the previous model). Finally, we classify with a final layer having softmax activation function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 2", "sec_num": "3.2" }, { "text": "Overall we perform four sets of experiments. In the following sub-sections we describe and analyze them one by one after the description of the datasets used. Data: Prior datasets and focus of research for fake information detection are on political domain. As our research focus is on multiple domains, we foster the dataset released by P\u00e9rez-Rosas et al. (2018). They released two novel datasets, namely FakeNews AMT and Celebrity. The Fake News AMT is collected via crowdsourcing (Amazon Mechanical Turk (AMT)) which covers news of six domains (i.e. Technology, Education, Business, Sports, Politics, and Entertainment). The Celebrity dataset is crawled directly from the web of celebrity gossips. It covers celebrity news. The AMT manually generated fake version of a news based on the real news. We extract the data domain wise to get the statistics of the dataset. It is observed that each domain contains equal number of instances (i.e. 80). The class distribution among each domain is also evenly distributed. The statistics of these two datasets is shown in the following Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 1081, "end": 1088, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiments, Results and Discussion and Comparison with State-of-the-Art", "sec_num": "4" }, { "text": "The news of the Fake News AMT dataset was obtained from a variety of mainstream news websites predominantly in the United States such as the ABCNews, CNN, USAToday, New York Times, FoxNews, Bloomberg, and CNET among others.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments, Results and Discussion and Comparison with State-of-the-Art", "sec_num": "4" }, { "text": "Multi-Domain Analysis: In this section, we do experiments on whole Fake News AMT and Celebrity datasets individually. We train our models on the whole Fake News AMT and Celebrity dataset and test on the respective test set. As the datasets is evenly distributed between real and fake news item, a random baseline of 50% could be assumed as reference. The results obtained by the two proposed methods outperform the baseline and the results of P\u00e9rez-Rosas et al. (2018) . The results obtained and comparisons are shown in the Table 2 . Our results indicate this task could be efficiently handled using deep learning approach. Cross-Domain Analyses: We perform another set of experiment to study the usefulness of the best performing system (i.e. Model2 ) across the domains. We train the model2, on FakeNews AMT and test on Celebrity and vice-versa. The results are shown in the Table 3 . If we compare with the in domain results it is observed that there is a significant drop. This drop also observed in the work of P\u00e9rez-Rosas et al. (2018) in machine learning setting. This indicates there is a significant role of a domain in fake news detection, as it is established by our deep learning guided experiments too. Multi-Domain Training and Domain-wise Test-ing: There are very small number of examples pairs in each sub-domain (i.e. Business, Technology etc) in FakeNews AMT dataset. We combine the examples pairs of multiple domains/genres for cross corpus utilization. We train our proposed models on the combined dataset of five out of six available domains and test on the remaining one. This has been performed to see how the model which is trained on heterogeneous data react on the domain to which the model was not exposed at the time of training. The results are shown in Exp. a part of the Table 4 . Both the models yield the best accuracy in the Education domain, which indicates this domain is open i.e. linguistics properties, vocabularies of this domain are quite similar to other domains. The models (i.e. Model 1 and 2) perform worst in the Entertainment and the Sports, respectively, which indicate these two domains are diverse in nature from the others in terms of linguistics properties, writing style, vocabularies etc. Domain-wise Training and Domain-wise Testing: We also eager to see in-domain effect of our systems. The FakeNews AMT dataset comprises of six separate domains. We train and test our models, on each domain's dataset of Fake News AMT. This evaluates our model's performance domain-wise. The results of this experiment are shown in the Exp. b part of the Table 4 . In this case both the models produce the highest accuracy in the Sports domain, followed by the Entertainment, as we have shown in our previous experiments that these two domains are diverse in nature from the others. This fact is established by this experiment too. Both the models produce the lowest result in the Technology and the Business domain, respectively.", "cite_spans": [ { "start": 443, "end": 468, "text": "P\u00e9rez-Rosas et al. (2018)", "ref_id": "BIBREF16" }, { "start": 1017, "end": 1042, "text": "P\u00e9rez-Rosas et al. (2018)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 525, "end": 532, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 878, "end": 885, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 1803, "end": 1810, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 2596, "end": 2604, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Experiments, Results and Discussion and Comparison with State-of-the-Art", "sec_num": "4" }, { "text": "Visualization of Word Level Attention: We take the visualization of the topic and the corresponding document at word level attention as shown in the Figure 4 and 5, respectively. The aim is to visualize the words which are assigned more weights during the prediction of the output class. In these Figures, words with more deeper colour indicate that they are getting more attention. We can observe, the words secretary, education in 4 and President, Donald in 5 are the words having deeper colour, i.e. these words are getting more weight compared to others. These words are Named Entities (NEs). It could be concluded that NEs phrases are important in fake news detection in multi domain setting.", "cite_spans": [], "ref_spans": [ { "start": 149, "end": 157, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Experiments, Results and Discussion and Comparison with State-of-the-Art", "sec_num": "4" }, { "text": "We extract the mis-classified and also the truly classified instances produced by the best performing system. We perform a rigorous analysis of these instances and try to find out the pattern in the mis-classified instances and the linguistics differences between those two categories of instances. It is found that the model fails mostly in the Entertainment followed by the sports and the Business domain etc. To name a few, we are showing such examples which are actually \"Legitimate\", but predicted as \"Fake\" the Table 5 and which are actually \"Fake\", but predicted as \"Legitimate\" the Table 6 . It is observed that both the topic and document are having ample number of NEs. It needs further investigation in this font.", "cite_spans": [], "ref_spans": [ { "start": 517, "end": 524, "text": "Table 5", "ref_id": null }, { "start": 590, "end": 597, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "4.1" }, { "text": "In this article, we propose two deep learning based approaches to mitigate the problem of fake news detection from multiple domains platform. Antecedent works in this line pay attention on satir- ical news or made use of the content of the factchecking websites, which was restricted to one domain (i.e. politics). To address these limitations, we focus to extend this problem into multi domain scenario. Our work extends the concept of fake news detection from uni-domain to multi-domain, thus making it more general and realistic. We evaluate our proposed models on the datasets whose contents are from multiple domains. Our two proposed approaches outperform the existing models with a notable margin. Experiments also reveal that there is a vital role of a domain in context of fake news detection. We would like to do more deeper analysis of the role of domain for this problem in future. Apart from this our future line of research would be as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "\u2022 It would be interesting for this work to encode the domain information in the Deep Neural Nets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "\u2022 BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019) embedding based model and make a comparison with fastText and ELMo based models in the context of fake news detection.", "cite_spans": [ { "start": 7, "end": 28, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF5" }, { "start": 39, "end": 58, "text": "(Yang et al., 2019)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "\u2022 Use of transfer learning and injection external knowledge for better understanding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "\u2022 Handling of Named Entities efficiently and incorporate their embedding with the normal phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "\u2022 Using WordNet to retrieve connections between words on the basis of semantics in the news corpora (both topic and document of news) which may influence in detection of Fake News. The selling point is to make budgeting and bookkeeping easier. But the data is also being used to offer new kinds of loans and investment products. Now banks have decided they aren't letting the data go without a fight. In recent weeks several large banks have been pushing to restrict the sharing of this kind of data with technology companies according to the tech firms. In some cases they are refusing to pass along information like the fees and interest rates they charge. Both sides see big money to be made from the reams of highly personal information created by financial transactions. Table 5 : Examples of mis-classified instances from Entertainment and Business domain. Examples are originally \"Legitimate\" but predicted as \"Fake\".", "cite_spans": [], "ref_spans": [ { "start": 776, "end": 783, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Conclusion and Future work", "sec_num": "5" }, { "text": "Slaven Bilic still has no support of West Ham's owners \"West Ham's owners have no faith in manager Slaven Bilic as his team won only six of their 11 games this year according to Sky sources. Bilic's contract runs out in the summer of 2018 and results have made it likely that he will not be offered a new deal this summer. Co-chairman David Sullivan told supporters 10 days ago after West Ham lost 3-2 at home to Leicester City. Sullivan said that even if performances and results improved in the next three games against Hull City Arsenal and Swansea City. West Ham's owners have a track record of being unloyal to their managers who don't meet their specs and there is a acceptance at boardroom level that Bilic has failed to prove a solid season.\" Education STEM Students Create Winning Invention STREAMWOOD, Ill. (AP) -A group of Streamwood High School students have created an invention that is exciting homeowners everywhere -and worrying electricity companies at the same time. The kids competed in the Samsung Solve for Tomorrow contest, entering and winning with a new solar panel that costs about $100 but can power an entire home -no roof takeover needed! The contest won the state-level competition which encourages teachers and students to solve real-world issues using science and math skills; the 16 studens will now compete in a national competition and, if successful, could win a prize of up to $200,000. Table 6 : Examples of mis-classified instances from the Sports and Education domain. Examples are originally \"Fake\" but predicted as \"Legitimate\" .", "cite_spans": [], "ref_spans": [ { "start": 1423, "end": 1430, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Sports", "sec_num": null }, { "text": "https://www.cnbc.com/2018/12/10/social-media-morepopular-than-newspapers-for-news-pew.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.fakenewschallenge.org/ 3 https://fasttext.cc/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural Machine Translation by Jointly Learning to Align and Translate. International Conference on Learning Representation (ICLR", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. International Con- ference on Learning Representation (ICLR).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Enriching Word Vectors with Subword Information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the As- sociation for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "News in an Online World: The Need for an automatic crap detector", "authors": [ { "first": "Yimin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Victoria", "middle": [ "L" ], "last": "Conroy", "suffix": "" }, { "first": "", "middle": [], "last": "Rubin", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Association for Information Science and Technology", "volume": "52", "issue": "", "pages": "1--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yimin Chen, Niall J Conroy, and Victoria L Rubin. 2015. News in an Online World: The Need for an automatic crap detector. Proceedings of the As- sociation for Information Science and Technology, 52(1):1-4.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "authors": [ { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Bart", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "Fethi", "middle": [], "last": "Aglar G\u00fcl\u00e7ehre", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyunghyun Cho, Bart van Merrienboer, \u00c7 aglar G\u00fcl\u00e7ehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representa- tions using RNN encoder-decoder for statistical ma- chine translation. CoRR, abs/1406.1078.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic Deception Detection: Methods for Finding Fake News", "authors": [ { "first": "J", "middle": [], "last": "Niall", "suffix": "" }, { "first": "Victoria", "middle": [ "L" ], "last": "Conroy", "suffix": "" }, { "first": "Yimin", "middle": [], "last": "Rubin", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Association for Information Science and Technology", "volume": "52", "issue": "", "pages": "1--4", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niall J Conroy, Victoria L Rubin, and Yimin Chen. 2015. Automatic Deception Detection: Methods for Finding Fake News. Proceedings of the Association for Information Science and Technology, 52(1):1-4.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multi-Category Classification by Soft-max Combination of Binary Classifiers", "authors": [ { "first": "S", "middle": [], "last": "Kaibo Duan", "suffix": "" }, { "first": "", "middle": [], "last": "Sathiya", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Keerthi", "suffix": "" }, { "first": "", "middle": [], "last": "Chu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 4th International Conference on Multiple Classifier Systems, MCS'03", "volume": "", "issue": "", "pages": "125--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaibo Duan, S. Sathiya Keerthi, Wei Chu, Shirish Kr- ishnaj Shevade, and Aun Neow Poo. 2003. Multi- Category Classification by Soft-max Combination of Binary Classifiers. In Proceedings of the 4th In- ternational Conference on Multiple Classifier Sys- tems, MCS'03, pages 125-134, Berlin, Heidelberg. Springer-Verlag.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Emergent: a Novel Data-set for Stance Classification", "authors": [ { "first": "William", "middle": [], "last": "Ferreira", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1163--1168", "other_ids": { "DOI": [ "10.18653/v1/N16-1138" ] }, "num": null, "urls": [], "raw_text": "William Ferreira and Andreas Vlachos. 2016. Emer- gent: a Novel Data-set for Stance Classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1163-1168, San Diego, California. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Deep Sparse Rectifier Neural Networks", "authors": [ { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the fourteenth international conference on artificial intelligence and statistics", "volume": "", "issue": "", "pages": "315--323", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep Sparse Rectifier Neural Networks. In Proceedings of the fourteenth international confer- ence on artificial intelligence and statistics, pages 315-323.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A Retrospective Analysis of the Fake News Challenge Stance-Detection Task", "authors": [ { "first": "Andreas", "middle": [], "last": "Hanselowski", "suffix": "" }, { "first": "Pvs", "middle": [], "last": "Avinesh", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Caspelherr", "suffix": "" }, { "first": "Debanjan", "middle": [], "last": "Chaudhuri", "suffix": "" }, { "first": "Christian", "middle": [ "M" ], "last": "Meyer", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1859--1874", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas Hanselowski, Avinesh PVS, Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M. Meyer, and Iryna Gurevych. 2018. A Retrospective Analysis of the Fake News Challenge Stance-Detection Task. In Proceedings of the 27th International Conference on Computational Lin- guistics, pages 1859-1874, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "News use Across Social Media Platforms", "authors": [ { "first": "Gottfried", "middle": [], "last": "Jeffrey", "suffix": "" }, { "first": "Shearer", "middle": [], "last": "Elisa", "suffix": "" } ], "year": 2016, "venue": "In In Pew Research Center Reports", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gottfried Jeffrey and Shearer Elisa. 2016. News use Across Social Media Platforms 2016. In In Pew Re- search Center Reports.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Adam: A Method for Stochastic Optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1412.6980" ] }, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Distributed Representations of Words and Phrases and their Compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "26", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed Representa- tions of Words and Phrases and their Composition- ality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Ad- vances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Combining Fact Extraction and Verification with Neural Semantic Matching Networks", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Haonan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.07039" ] }, "num": null, "urls": [], "raw_text": "Yixin Nie, Haonan Chen, and Mohit Bansal. 2018. Combining Fact Extraction and Verification with Neural Semantic Matching Networks. arXiv preprint arXiv:1811.07039.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Combining fact extraction and verification with neural semantic matching networks", "authors": [ { "first": "Yixin", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Haonan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2019, "venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019", "volume": "", "issue": "", "pages": "6859--6866", "other_ids": { "DOI": [ "10.1609/aaai.v33i01.33016859" ] }, "num": null, "urls": [], "raw_text": "Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neu- ral semantic matching networks. In The Thirty- Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Ad- vances in Artificial Intelligence, EAAI 2019, Hon- olulu, Hawaii, USA, January 27 -February 1, 2019, pages 6859-6866.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Glove: Global Vectors for Word Representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": { "DOI": [ "10.3115/v1/D14-1162" ] }, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Automatic Detection of Fake News", "authors": [ { "first": "Ver\u00f3nica", "middle": [], "last": "P\u00e9rez-Rosas", "suffix": "" }, { "first": "Bennett", "middle": [], "last": "Kleinberg", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Lefevre", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3391--3401", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ver\u00f3nica P\u00e9rez-Rosas, Bennett Kleinberg, Alexandra Lefevre, and Rada Mihalcea. 2018. Automatic De- tection of Fake News. In Proceedings of the 27th In- ternational Conference on Computational Linguis- tics, pages 3391-3401, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Deep Contextualized Word Representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": { "DOI": [ "10.18653/v1/N18-1202" ] }, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Credibility Assessment of Textual Claims on the Web", "authors": [ { "first": "Kashyap", "middle": [], "last": "Popat", "suffix": "" }, { "first": "Subhabrata", "middle": [], "last": "Mukherjee", "suffix": "" }, { "first": "Jannik", "middle": [], "last": "Str\u00f6tgen", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 25th ACM International Conference on Information and Knowledge Management, CIKM 2016", "volume": "", "issue": "", "pages": "2173--2178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kashyap Popat, Subhabrata Mukherjee, Jannik Str\u00f6tgen, and Gerhard Weikum. 2016. Credibility Assessment of Textual Claims on the Web. In Pro- ceedings of the 25th ACM International Conference on Information and Knowledge Management, CIKM 2016, Indianapolis, IN, USA, October 24-28, 2016, pages 2173-2178.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A Stylometric Inquiry into Hyperpartisan and Fake News", "authors": [ { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Kiesel", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Reinartz", "suffix": "" }, { "first": "Janek", "middle": [], "last": "Bevendorff", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "231--240", "other_ids": { "DOI": [ "10.18653/v1/P18-1022" ] }, "num": null, "urls": [], "raw_text": "Martin Potthast, Johannes Kiesel, Kevin Reinartz, Janek Bevendorff, and Benno Stein. 2018. A Sty- lometric Inquiry into Hyperpartisan and Fake News. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 231-240, Melbourne, Aus- tralia. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Fake News or Truth? using Satirical Cues to Detect Potentially Misleading News", "authors": [ { "first": "Victoria", "middle": [], "last": "Rubin", "suffix": "" }, { "first": "Niall", "middle": [], "last": "Conroy", "suffix": "" }, { "first": "Yimin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Sarah", "middle": [], "last": "Cornwell", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the second workshop on computational approaches to deception detection", "volume": "", "issue": "", "pages": "7--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victoria Rubin, Niall Conroy, Yimin Chen, and Sarah Cornwell. 2016. Fake News or Truth? using Satiri- cal Cues to Detect Potentially Misleading News. In Proceedings of the second workshop on computa- tional approaches to deception detection, pages 7- 17.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Deception Detection for News: Three Types of Fakes", "authors": [ { "first": "Victoria", "middle": [ "L" ], "last": "Rubin", "suffix": "" }, { "first": "Yimin", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Niall", "middle": [ "J" ], "last": "Conroy", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community, ASIST '15", "volume": "83", "issue": "", "pages": "1--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victoria L. Rubin, Yimin Chen, and Niall J. Conroy. 2015. Deception Detection for News: Three Types of Fakes. In Proceedings of the 78th ASIS&T An- nual Meeting: Information Science with Impact: Re- search in and for the Community, ASIST '15, pages 83:1-83:4, Silver Springs, MD, USA. American So- ciety for Information Science.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A Novel Approach Towards Fake News Detection: Deep Learning Augmented with Textual Entailment Features", "authors": [ { "first": "Tanik", "middle": [], "last": "Saikh", "suffix": "" }, { "first": "Amit", "middle": [], "last": "Anand", "suffix": "" }, { "first": "Asif", "middle": [], "last": "Ekbal", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2019, "venue": "International Conference on Applications of Natural Language to Information Systems", "volume": "", "issue": "", "pages": "345--358", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tanik Saikh, Amit Anand, Asif Ekbal, and Pushpak Bhattacharyya. 2019. A Novel Approach Towards Fake News Detection: Deep Learning Augmented with Textual Entailment Features. In International Conference on Applications of Natural Language to Information Systems, pages 345-358, Salford, UK. Springer.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Dropout: a Simple Way to Prevent Neural Networks from Overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "The Journal of Machine Learning Research", "volume": "15", "issue": "1", "pages": "1929--1958", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a Simple Way to Prevent Neural Networks from Overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "FEVER: a Large-Scale Dataset for Fact Extraction and Verification", "authors": [ { "first": "James", "middle": [], "last": "Thorne", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/N18-1074" ] }, "num": null, "urls": [], "raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a Large-Scale Dataset for Fact Extraction and Verification. In Proceedings of the 2018", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "809--819", "other_ids": {}, "num": null, "urls": [], "raw_text": "Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "liar, liar pants on fire\": A new benchmark dataset for fake news detection", "authors": [ { "first": "William", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Wang", "middle": [], "last": "", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "422--426", "other_ids": { "DOI": [ "10.18653/v1/P17-2067" ] }, "num": null, "urls": [], "raw_text": "William Yang Wang. 2017. \"liar, liar pants on fire\": A new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 422-426, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Show, attend and tell: Neural Image Caption Generation with Visual Attention", "authors": [ { "first": "Kelvin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhudinov", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "2048--2057", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural Image Caption Generation with Visual At- tention. In International conference on machine learning, pages 2048-2057.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.08237" ] }, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretrain- ing for Language Understanding. arXiv preprint arXiv:1906.08237.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Hierarchical attention networks for document classification", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchi- cal attention networks for document classification.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1480--1489", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1480-1489.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "TwoWingOS: A two-wing optimization strategy for evidential claim verification", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "105--114", "other_ids": { "DOI": [ "10.18653/v1/D18-1010" ] }, "num": null, "urls": [], "raw_text": "Wenpeng Yin and Dan Roth. 2018. TwoWingOS: A two-wing optimization strategy for evidential claim verification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Pro- cessing, pages 105-114, Brussels, Belgium. Asso- ciation for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": ". The layers are A. Embedding Layer B. Encoding Layer (Bi-GRU) C. Word level Attention D. Multi-layer Perceptron (MLP).", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "Architectural Diagram of the Proposed First System", "num": null }, "FIGREF2": { "type_str": "figure", "uris": null, "text": "Word Level Attention Network", "num": null }, "FIGREF3": { "type_str": "figure", "uris": null, "text": "Word level Attention on News Document, A Part of it is Shown Due to Space Constraint.", "num": null }, "TABREF1": { "text": "Class Distribution and Word Statistics for Fake News AMT and Celebrity Datasets. Avg: Average, sent: Sentence", "num": null, "content": "", "html": null, "type_str": "table" }, "TABREF2": { "text": "Architectural Diagram of the Proposed Second Model", "num": null, "content": "
DatasetSystemModelTest Accuracy(%)
FakeNews AMTProposedModel1 Model277.08 83.3
(P\u00e9rez-Rosas et al., 2018)Linear SVM74
CelebrityProposedModel1 Model276.53 79
(P\u00e9rez-Rosas et al., 2018)Linear SVM76
", "html": null, "type_str": "table" }, "TABREF3": { "text": "", "num": null, "content": "
: Classification Results for the FakeNews AMT
and Celebrity News Dataset with Two Proposed Meth-
ods and Comparison with Previous Results
TrainingTestingAccuracy(%)
FakeNewsAMTCelebrity54.3
CelebrityFakeNewsAMT68.5
", "html": null, "type_str": "table" }, "TABREF4": { "text": "", "num": null, "content": "", "html": null, "type_str": "table" }, "TABREF6": { "text": "", "num": null, "content": "
: Result of Exp. a (Trained on Multi-domain
Data and Tested on Domain wise Data) and Exp. b
(Trained on Domain wise Data and Tested on Domain
wise Data)
Figure 4: Word Level Attention on News Topic
", "html": null, "type_str": "table" }, "TABREF7": { "text": "Big or small Chris Pratt has heard it all. These days the \"Guardians of the Galaxy\" star 37 is taking flak for being too thin but he's not taking it lying down. Pratt who has been documenting the healthy snacks he's eating", "num": null, "content": "
DomainTopicContent
while filming \"Jurassic World 2\" in a series of
EntertainmentChris Pratt responds to body shamers telling him he's too thin\"What's My Snack\" Instagram videos fired back -in his usual tongue-in-cheek manner -after some followers apparently suggested he looked too thin. \"So many
people have said I look too thin in my recent episodes of
#WHATSMYSNACK he wrote on Instagram Thursday.
Some have gone as far as to say I look 'skeletal.'
Well just because I am a male doesn't mean I'm
impervious to your whispers. Body shaming hurts.\"
The big banks and Silicon Valley are waging an escalating
battle over your personal financial data: your dinner bill last
night your monthly mortgage payment the interest rates you
pay. Technology companies like Mint and Betterment have
been eager to slurp up this data mainly by building services
that let people link all their various bank-account and
credit-card information.
BusinessBanks and Tech Firms Battle Over Something Akin to Gold: Your Data
6 Acknowledgement
", "html": null, "type_str": "table" } } } }