{ "paper_id": "O15-3003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:10:17.262683Z" }, "title": "Word Co-occurrence Augmented Topic Model in Short Text", "authors": [ { "first": "Guan-Bin", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "gbchen@ikmlab.csie.ncku.edu.tw" }, { "first": "Hung-Yu", "middle": [], "last": "Kao", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Cheng Kung University", "location": {} }, "email": "hykao@mail.ncku.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The large amount of text on the Internet cause people hard to understand the meaning in a short limit time. Topic models (e.g. LDA and PLSA) has been proposed to summarize the long text into several topic terms. In the recent years, the short text media such as tweet is very popular. However, directly applies the transitional topic model on the short text corpus usually gating non-coherent topics. Because there is no enough words to discover the word co-occurrence pattern in a short document. The Bi-term topic model (BTM) has been proposed to improve this problem. However, BTM just consider simple bi-term frequency which cause the generated topics are dominated by common words. In this paper, we solve the problem of the frequent bi-term in BTM. Thus, we proposed an improvement of word co-occurrence method to enhance the topic models. We apply the word co-occurrence information to the BTM. The experimental result that show our PMI-\u03b2-BTM gets well result in the both of regular short news title text and the noisy tweet text. Moreover, there are two advantages in our method. We do not need any external data and our proposed methods are based on the original topic model that we did not modify the model itself, thus our methods can easily apply to some other existing BTM based models.", "pdf_parse": { "paper_id": "O15-3003", "_pdf_hash": "", "abstract": [ { "text": "The large amount of text on the Internet cause people hard to understand the meaning in a short limit time. Topic models (e.g. LDA and PLSA) has been proposed to summarize the long text into several topic terms. In the recent years, the short text media such as tweet is very popular. However, directly applies the transitional topic model on the short text corpus usually gating non-coherent topics. Because there is no enough words to discover the word co-occurrence pattern in a short document. The Bi-term topic model (BTM) has been proposed to improve this problem. However, BTM just consider simple bi-term frequency which cause the generated topics are dominated by common words. In this paper, we solve the problem of the frequent bi-term in BTM. Thus, we proposed an improvement of word co-occurrence method to enhance the topic models. We apply the word co-occurrence information to the BTM. The experimental result that show our PMI-\u03b2-BTM gets well result in the both of regular short news title text and the noisy tweet text. Moreover, there are two advantages in our method. 
We do not need any external data and our proposed methods are based on the original topic model that we did not modify the model itself, thus our methods can easily apply to some other existing BTM based models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With the advancement of information and communication technology, the information we obtained is very abundant and multivariate. Especially, in the recent 15 years, many type of the Internet media grow up so that people can get large amount of the information in a short time. These internet media include Wikipedia, blogs and the recently popular social medial such as Twitter, Facebook et.al. Generally, the articles/documents in the Wikipedia, and blogs are usually the long text and have the complete content. While the short text social media, such as Twitter, become very popular in the recent years. The reason is that these short text social media provide a very convenient way to share the people feeling and thinking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Generally, these Internet media deliver the people thinking by using the text. However, the large amount of text on the Internet cause people hard to understand the meaning in a short limit time. To solve the problem, many document summarization technologies have been proposed. Among them, topic models summarize the context in large amount of documents into several topic terms. By reading these topic terms, people will understand the content in a short time. Topic model can be performed by the vector space model or the probability model. In the recent years, the probability models such as Probabilistic Latent Semantic Analysis (pLSA) (Hofmann, 1999) and Latent Dirichlet Allocation (LDA) (Blei et al., 2003) are very popular because the probability models base on the document generation process. The inspirations of the document generation process come from the human written articles. When a person writes an article, he or she will inspire some thinking in mind, then extend these thinking into some related words. Finally, they write down these words to complete an article. Probability topic models simulate the behavior of above document generating process. In the view of the vectorization of the probability topic models, when we have a text corpus, we have known the documents and its words distribution by statistic the word vector. Then, the probability topic models split the document-word matrix into the document-topic and topic-word matrices. The distribution of the document-topic matrix describes that the degree of each document belongs each topic while the topic-word matrix describes the degree of each word belongs each topic. The \"topic\" in these two matrices is the latent factor as the human thinking.", "cite_spans": [ { "start": 642, "end": 657, "text": "(Hofmann, 1999)", "ref_id": "BIBREF0" }, { "start": 696, "end": 715, "text": "(Blei et al., 2003)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In essence, the topic models capture the word co-occurrence information and these highly co-occurrence words are put together to compose a topic (Divya et al., 2013; Mimno et al., 2011) . 
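For illustration, the document-level co-occurrence argument can be made concrete in a few lines of code. The following minimal Python sketch is only illustrative (the models in this paper were implemented in Java, and the toy documents are our own): it counts the unordered word pairs that co-occur inside a single document, showing how much more co-occurrence evidence a long document provides than a short one.

```python
from itertools import combinations
from collections import Counter

def doc_cooccurrence(tokens):
    """Count the unordered word pairs that co-occur inside one document."""
    unique_terms = sorted(set(tokens))
    return Counter(combinations(unique_terms, 2))

# Toy illustration: a longer document exposes many more co-occurring
# pairs than a short one, which is the evidence LDA relies on.
long_doc = ("topic model learn latent topic from word co-occurrence "
            "pattern observed in every document of the corpus").split()
short_doc = "apple store open".split()

print(len(doc_cooccurrence(long_doc)))   # 105 distinct pairs from 15 unique words
print(len(doc_cooccurrence(short_doc)))  # only 3 pairs from 3 unique words
```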
So, the key to find out high quality topics is that the corpus must contain a large amount of word co-occurrence information and the topic model has the ability to correctly capture the amount of the word co-occurrence. However, the traditional topic models work well in the long text corpus but work poorly in short text corpus. The reason is that the original intention of LDA is designed to model the long text corpus. Exactly, LDA capture the word co-occurrence in document-level (Divya et al., 2013; Yan et al., 2013) , but there are no enough words to well judge the word co-occurrence in document-level in a short text document. Figure 1 is an example which shows the difference of the topic model in between the long text and short text corpus. In the long text corpus, each document provides a lot of word co-occurrence information, so that LDA can well capture these information to discover the high quality topics. While in the short text document, there are no enough words in a single document to discover the word co-occurrence information.", "cite_spans": [ { "start": 145, "end": 165, "text": "(Divya et al., 2013;", "ref_id": "BIBREF2" }, { "start": 166, "end": 185, "text": "Mimno et al., 2011)", "ref_id": "BIBREF3" }, { "start": 672, "end": 692, "text": "(Divya et al., 2013;", "ref_id": "BIBREF2" }, { "start": 693, "end": 710, "text": "Yan et al., 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 824, "end": 832, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To overcome above problems in short text, many researchers consider a simpler topic model, mixture of unigrams model. Mixture of unigrams model samples topics in global corpus level (Nigam et al., 2000; Zhao et al., 2011) . More specifically, the word co-occurrence in document-level means that the amount of the word co-occurrence relation comes from a single document. On the contrary, the word co-occurrence in corpus-level means that the amount of the word co-occurrence relation comes from a full corpus which contains many documents. Mixture of unigrams overcomes the lack of words in the short text documents. Further, Xiaohui Yan et al. proposed the Bi-term Topic Model (BTM) (Yan et al., With the advancement of information and communication technology, the information we obtained is much abundant and multivariate.", "cite_spans": [ { "start": 182, "end": 202, "text": "(Nigam et al., 2000;", "ref_id": "BIBREF5" }, { "start": 203, "end": 221, "text": "Zhao et al., 2011)", "ref_id": "BIBREF6" }, { "start": 684, "end": 696, "text": "(Yan et al.,", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example of LDA in the long text and short text corpus", "sec_num": null }, { "text": "Especially in the recent years, many types of the Internet media grows up so that people can get large amount of the information in a short time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example of LDA in the long text and short text corpus", "sec_num": null }, { "text": "With the advancement of information and communication technology, he information we obtained is much abundant and multivariate. Especially n the recent years, many types of the Internet media grows up so that eople can get large amount of the information in a short time. Cheng et al., 2014) which directly model the word co-occurrence and use the corpus-level bi-term to overcome the lack of the text information problem. 
A bi-term is an unordered word pair co-occurring in a short text document. The major advantage of BTM is that 1) BTM model the word co-occurrence by using the explicit bi-term, and 2) BTM aggregate these word co-occurrence patterns in the corpus for topic discovering (Yan et al., 2013; Cheng et al., 2014) . BTM abandons the document-level directly. A topic in BTM contains several bi-term and a bi-term crosses many documents. BTM emphasizes that the co-occurrence information comes from all bi-terms in whole corpus. However, BTM will make the common words be performed excessively because the frequency of bi-term comes from the whole corpus instead of a short document.", "cite_spans": [ { "start": 272, "end": 291, "text": "Cheng et al., 2014)", "ref_id": "BIBREF7" }, { "start": 691, "end": 709, "text": "(Yan et al., 2013;", "ref_id": "BIBREF4" }, { "start": 710, "end": 729, "text": "Cheng et al., 2014)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1. An example of LDA in the long text and short text corpus", "sec_num": null }, { "text": "In this paper, we solve the frequent bi-term problem in BTM. We propose an approach base on BTM. For the problem in BTM, a simple and intuitive solution is to use pointwise mutual information (PMI) (Church & Hanks, 1990) to decrease the statistical amount of the frequent words in whole corpus. With respect to the frequency of bi-term, the PMI can normalize the score by each single word frequency in the bi-term. Otherwise, the priors in the topic models usually set symmetric. This symmetric priors mean that there is not any preference of words in any specific topic (Wallach et al., 2009) . An intuitive idea is that why not adopt some word co-occurrence information in priors to restrict the generated topics. Base on above two ideas, we propose a novel prior adjustment method, PMI-\u03b2 priors, which first use the PMI to mine the word co-occurrence from the whole corpus. Then, we transform such PMI scores to the priors of BTM. Figure 2 shows the graphical representation of the PMI-\u03b2-BTM.", "cite_spans": [ { "start": 198, "end": 220, "text": "(Church & Hanks, 1990)", "ref_id": "BIBREF8" }, { "start": 571, "end": 593, "text": "(Wallach et al., 2009)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 934, "end": 942, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Figure 2. The graphical representation of the PMI-\u03b2-BTM", "sec_num": null }, { "text": "In summary, the proposed approach enhance the amount of the word co-occurrence and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The graphical representation of the PMI-\u03b2-BTM", "sec_num": null }, { "text": "w i \uf05a w j \uf066 \uf066 \uf071 ... \uf061 ... w i \uf05a w j \uf062 ... \uf066 ... w i \uf05a w j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The graphical representation of the PMI-\u03b2-BTM", "sec_num": null }, { "text": "also based on the original topic model. Basing on the original topic model means we did not modify the model itself, thus our methods can easily apply to some other existing BTM based models, to overcome the short text problem without any modification. To test the performance of our two methods completely, we prepare two different types of short text corpus for the experiments. One is the tweet text and another is the news title. The context of news title dataset is regular and formal while the text in tweet usually contain many noise. 
Experimental results show our PMI-\u03b2 priors method is better than the BTM in both tweet and news title datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The graphical representation of the PMI-\u03b2-BTM", "sec_num": null }, { "text": "The remaining of this paper shows below. In Section 2, we show the survey of some traditional topic models and the previous works of topic model to overcome the short text. Section 3 shows our proposed PMI-\u03b2 priors and the re-organized document methods. The experiment results show in Section 4. Finally, we conclude this research in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2. The graphical representation of the PMI-\u03b2-BTM", "sec_num": null }, { "text": "Topic Model is a method to find out the hidden semantic topics from the observed documents in the text corpus. Topic Models have been researched several years. Generally, topic model can be performed by the vector space model or the probability model. The early one of the vector space topic model, Latent Semantic Analysis (LSA) (Landauer et al., 1998) , uses the singular value decomposition (SVD) to find out the latent topic. However, LSA does not model the polysemy well and the cost of SVD is very high (Hofmann, 1999; Blei et al., 2003) . Afterward, Thomas Hofmann proposed the one-document-multi-topics model, probabilistic Latent Semantic Analysis (pLSA) (Hofmann, 1999) . pLSA bases on the document generation process which like the human writing. However, the numerous parameters of pLSA cause the overfitting problem and pLSA does not define the generation of the unknown documents. In 2003, Blei et al. proposed a well-known Latent Dirichlet Allocation (LDA) (Blei et al., 2003) , LDA use the prior probability in Bayes theory to extents pLSA and simplify the parameters estimate process in pLSA. Also, the non-zero priors let LDA have the ability to infer the new documents.", "cite_spans": [ { "start": 330, "end": 353, "text": "(Landauer et al., 1998)", "ref_id": "BIBREF10" }, { "start": 509, "end": 524, "text": "(Hofmann, 1999;", "ref_id": "BIBREF0" }, { "start": 525, "end": 543, "text": "Blei et al., 2003)", "ref_id": "BIBREF1" }, { "start": 664, "end": 679, "text": "(Hofmann, 1999)", "ref_id": "BIBREF0" }, { "start": 972, "end": 991, "text": "(Blei et al., 2003)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "The Survey of the Traditional Topic Models for Normal Text", "sec_num": "2.1" }, { "text": "However, there are some drawbacks in LDA. First, LDA works under the bag-of-word model hypothesis. In the bag-of-word model, each word of the document is no order and independent of others (Wallach, 2006) . The hypothesis compared with the human writing behavior is unreasonable (Divya et al., 2013) . Second, LDA emphasizes the relations between topics are week, but actually, the topics may have hierarchical structure. Third, LDA requires the large number of articles and well-structured long articles to get the high quality topics. Apply LDA on the short text or uncompleted sentences corpus usually get poor results. The fourth drawback is that in spite of the LDA has the concept of the prior probabilities but LDA priors generally set the symmetric values in each prior vector, like <0.1> or <0.01>. The symmetric prior means no bias of each words in the specific topic (Wallach et al., 2009) . 
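For illustration, the difference between symmetric and asymmetric priors can be shown with the small sketch below (plain Python, illustrative only; the word list and the values follow the pen/apple/banana example used later in Section 3). A symmetric β vector expresses no preference among the words of a topic, while an asymmetric vector biases the topic toward particular words.

```python
vocab = ["pen", "apple", "banana"]

# Symmetric prior <0.1, 0.1, 0.1>: no preference for any word in the topic.
beta_symmetric = [0.1] * len(vocab)

# Asymmetric prior <0.1, 0.5, 0.5>: biases the topic toward putting
# "apple" and "banana" together during topic sampling.
beta_asymmetric = [0.1, 0.5, 0.5]

print(list(zip(vocab, beta_symmetric)))
print(list(zip(vocab, beta_asymmetric)))
```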
In this situation, the priors only provide the smooth technology to avoid the zero probability and the model only use the statistical information from the data to discover the hidden topics.", "cite_spans": [ { "start": 189, "end": 204, "text": "(Wallach, 2006)", "ref_id": "BIBREF11" }, { "start": 279, "end": 299, "text": "(Divya et al., 2013)", "ref_id": "BIBREF2" }, { "start": 878, "end": 900, "text": "(Wallach et al., 2009)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "The Survey of the Traditional Topic Models for Normal Text", "sec_num": "2.1" }, { "text": "To overcome above four drawbacks, many researchers propose new modify models. Such as N-gram Topic Model (Wang et al., 2007) and HMM-LDA (Griffiths et al., 2004) provide the context modeling. Wei Li et al. proposed the Pachinko Allocation Model (PAM) (Li & McCallum, 2006) which adds the super topic concept and make the topic have the hierarchical structure. Otherwise, Zhiyuan Chen et al. apply the must-link and cannot-link information to guide the document generation process which words must or not to be put into a topic (Chen & Liu, 2014) .", "cite_spans": [ { "start": 105, "end": 124, "text": "(Wang et al., 2007)", "ref_id": "BIBREF12" }, { "start": 137, "end": 161, "text": "(Griffiths et al., 2004)", "ref_id": "BIBREF13" }, { "start": 251, "end": 272, "text": "(Li & McCallum, 2006)", "ref_id": "BIBREF14" }, { "start": 527, "end": 545, "text": "(Chen & Liu, 2014)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "The Survey of the Traditional Topic Models for Normal Text", "sec_num": "2.1" }, { "text": "With the rise of social media in recent years, topic models have been utilized for social media analysis. For example, some researches apply topic models in social media for event tracking (Lin et al., 2010) , content characterizing (Zhao et al., 2011; Ramage et al., 2010) , and content recommendation (Chen et al., 2010; Phelan et al., 2009) . However, to share people thinking conveniently, the context is usually short. These short text contexts make topic models hard to discover the amount of word co-occurrence. For the short text corpus, there are three directions to overcome the insufficient of the word co-occurrence problem. One is using the external resources to guide the model generation, another is aggregating several short texts into a long text, and the other is improving the model to satisfy the short text properties. For the first direction, Phan et al. (Phan et al., 2008) proposed a framework that adopt the large external resources (such as Wiki and blog) to deal with the data sparsity problem. R.Z. Michal et al. proposed an author topic model (Rosen-Zvi et al., 2004) which adopt the user information and make the model suitable for specific users. Jin et al. proposed the Dual-LDA model (Jin et al., 2011) , it use not only the short text corpus but also the related long text corpus to generate topics, respectively. The generation process use the long text to help the short text modeling. If the quality of the external long text or knowledge base is high, the generated topic quality will be improve. However, we cannot always obtain the related long text to guide short text and the related long text is very domain specific. So, using external resources is not suitable for the general short text dataset. In addition to adopt the long text, Hong et al. 
aggregate the tweets which shared the same words and get better results than the original tweet text (Hong & Davison, 2010 ).", "cite_spans": [ { "start": 189, "end": 207, "text": "(Lin et al., 2010)", "ref_id": "BIBREF16" }, { "start": 233, "end": 252, "text": "(Zhao et al., 2011;", "ref_id": "BIBREF6" }, { "start": 253, "end": 273, "text": "Ramage et al., 2010)", "ref_id": "BIBREF17" }, { "start": 303, "end": 322, "text": "(Chen et al., 2010;", "ref_id": "BIBREF18" }, { "start": 323, "end": 343, "text": "Phelan et al., 2009)", "ref_id": "BIBREF19" }, { "start": 877, "end": 896, "text": "(Phan et al., 2008)", "ref_id": "BIBREF20" }, { "start": 1072, "end": 1096, "text": "(Rosen-Zvi et al., 2004)", "ref_id": "BIBREF21" }, { "start": 1217, "end": 1235, "text": "(Jin et al., 2011)", "ref_id": "BIBREF22" }, { "start": 1891, "end": 1912, "text": "(Hong & Davison, 2010", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Topic Models for Short Text", "sec_num": "2.2" }, { "text": "For the model improvement, Wayne et al. use the mixture of unigrams model to model the tweets topics from whole corpus text (Zhao et al., 2011) . Their experimental results verify that the mixture of unigram model can discover more coherent topics than LDA in the short text corpus. Further, Xiaohui Yan et al. proposed the Bi-term Topic Model (BTM) (Yan et al., 2013; Cheng et al., 2014) which directly model the word co-occurrence and use the corpus level bi-term to overcome the lack of the text information problem. A bi-term is a word pair containing a co-occur relation in this two words. The advantage is that BTM can model the general text without any domain specific external data. Comparing with the mixture of unigram, BTM is a special case of the mixture of unigram. They both model the corpus level topic but BTM generates two words (bi-term) every time the generation process. However, BTM discovers the word co-occurrence just by considering the bi-term frequency. The bi-term frequency will be failed to judge the word co-occurrence when the bi-term frequency is high but one of the frequency of two words in a bi-term is high and another is low.", "cite_spans": [ { "start": 124, "end": 143, "text": "(Zhao et al., 2011)", "ref_id": "BIBREF6" }, { "start": 350, "end": 368, "text": "(Yan et al., 2013;", "ref_id": "BIBREF4" }, { "start": 369, "end": 388, "text": "Cheng et al., 2014)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Topic Models for Short Text", "sec_num": "2.2" }, { "text": "Topic models learn topics base on the amount of the word co-occurrence in the documents. The word co-occurrence is a degree which describes how often the two words appear together. BTM, discovers topics from bi-terms in the whole corpus to overcome the lack of local word co-occurrence information. However, BTM will make the common words be performed excessively because BTM identifies the word co-occurrence information by the bi-term frequency in corpus-level. Thus, we propose a PMI-\u03b2 priors methods on BTM. Our PMI-\u03b2 priors method can adjust the co-occurrence score to prevent the common words problem. Next, we will describe the detail of our method of PMI-\u03b2 priors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "We first describe the detail of BTM. First, we introduce the notation of \"bi-term\". Bi-term is the word pair co-occurring in the short text. 
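The bi-term construction described in the remainder of this paragraph can be sketched in a few lines of illustrative Python (our own implementation was written in Java); stop-word removal is assumed to have been done beforehand.

```python
from itertools import combinations

def extract_biterms(tokens):
    """Return all unordered word pairs (bi-terms) from one short document."""
    unique_terms = sorted(set(tokens))
    return list(combinations(unique_terms, 2))

# "I visit apple store" after stop-word removal ("I" is dropped):
doc = ["visit", "apple", "store"]
biterm_set = []                      # the corpus-level bi-term set B
biterm_set.extend(extract_biterms(doc))
print(biterm_set)  # [('apple', 'store'), ('apple', 'visit'), ('store', 'visit')]
```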
Any two distinct words in a document construct a bi-term. For example, a document with three terms will generate three bi-term (Yan et al., 2013) :", "cite_spans": [ { "start": 268, "end": 286, "text": "(Yan et al., 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf028 \uf029 \uf07b \uf07d 1 2 3 1 2 2 3 1 3 , , , , , , , t t t t t t t t t \uf0de .", "eq_num": "(1)" } ], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "Note that each bi-term is unordered. For a real case example, we have a document and the context is \"I visit apple store\". Because \"I\" is a stop-word, we remove it. The remaining three terms \"visit\", \"apple\" and \"store\" will generate three bi-terms \"visit apple\", \"apple store\", and \"visit store\". We generate all possible bi-terms for each document and put all bi-terms in the bi-term set B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "Second, we describe the parameter estimation of the BTM. The aim of the parameter estimation of BTM is to estimate the topic assignment z, the corpus-topic posteriori distribution \uf071 and the topic-word posteriori distribution \uf066. But the Gibbs sampling can integrate \uf071\uf020 and \uf066\uf020 due to use the conjugate priors. Thus, the only one parameter z should be estimate. Clearly, we should assign a suitable topic for each bi-term. The Gibbs sampling equation shows below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( | , , , ) b P z k \uf071 \uf06a \uf0d8 \uf03d \uf0b5 \uf0d7 z B \u03b1 \u03b2 ,", "eq_num": "(2)" } ], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "where z is the topic assignment, k means the kth topic, B is the bi-term set, \uf061 is the corpus-topic prior distribution and \u03b2 is the topic-word prior distribution. The \uf071\uf020 and \uf066\uf020 in Eq.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "(2) show following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ", , 1 ( ) ( ) k b k K k b k k n n \uf061 \uf071 \uf061 \uf0d8 \uf0d8 \uf03d \uf02b \uf03d \uf02b \uf0e5 ,", "eq_num": "(3)" } ], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 1 2 2 , ,", "eq_num": ", , 1 1 (" } ], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ") ( ) ( ) ( ) k b k k b k t t t t k b k k b k w w w w V V w w w w t t n n n n \uf062 \uf062 \uf06a \uf062 \uf062 \uf0d8 \uf0d8 \uf0d8 \uf0d8 \uf03d \uf03d \uf02b \uf02b \uf03d \uf0b4 \uf02b \uf02b \uf0e5 \uf0e5 ,", "eq_num": "(4)" } ], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "where V is the number of unique words in the corpus, n k,-b is the statistical count for the document-topic distribution, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": ", t w k b", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "n \uf0d8 is the statistical count for the document-topic distribution. When the frequency of bi-term is high the two terms in this bi-term tend to be put into the same topic. Otherwise, to overcome the lack of words in a single document BTM abandons the document-level directly. A topic in BTM contains several bi-term and a bi-term crosses many documents. BTM emphasizes that the co-occurrence information comes from all bi-terms in whole corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "However, just consider the frequency of bi-term in corpus-level will generate the topics which contain too many common words. To solve this problem, we consider the Pointwise Mutual Information (PMI) (Church & Hanks, 1990 ). Since the PMI score not only considers the co-occurrence frequency of the two words, but also normalizes by the single word frequency. Thus, we want to apply PMI score in the original BTM. A suitable way to apply PMI scores is modifying the priors in the BTM. The reason is that the priors modifying will not increase the complexity in the generation model and very intuitive. Clearly, there are two kinds of priors in BTM which are \u03b2-prior and \u03b2-priors. The \u03b2-prior is a corpus-topic bias without the data. While the \u03b2-priors are topic-word biases without the data. Applying the PMI score to the \u03b2-priors is the only one choice because we can adjust the degree of the word co-occurrence by modifying the distributions in the \u03b2-priors. For example, we assume that a topic contains three words \"pen\", \"apple\" and \"banana\". In the symmetric priors, we set <0.1, 0.1, 0.1> which means no bias of these three words, while we can apply <0.1, 0.5, 0.5> to enhance the word co-occurrence of \"apple\" and \"banana\". Thus the topic will prefer to put the \"apple\" and \"banana\" together in the topic sampling step. Figure 3 shows our PMI-\u03b2-priors approach. After pre-procession, we first calculate the PMI score of each bi-term as ( , ) PMI( , ) log ( ) ( )", "cite_spans": [ { "start": 200, "end": 221, "text": "(Church & Hanks, 1990", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 1327, "end": 1335, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "The Word Co-occurrence Augmented Methods", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "x y x y x y p w w w w p w p w \uf03d ,", "eq_num": "(5)" } ], "section": "Figure 3. 
The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "Because the priors can view as an additional statistics count of the target probability, the value ordinarily should be greater than or equal to zero. Thus, we adjust the value of NPMI to [0, 2] by adding one as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "PMI( , ) NPMI( , ) 1 log ( , ) x y x y x y w w w w p w w \uf03d \uf02b \uf02d .", "eq_num": "(6)" } ], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "After getting the NPMI scores, we transform these scores to meet the \u03b2-priors. Let \u03b2 SYM is the original symmetric \u03b2-priors and the PMI \u03b2-priors, denote \u03b2 PMI , define as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ", SYM PMI 0.1 NPMI( , ) x y w w x y w w \uf062 \uf062 \uf03d \uf02b \uf0b4 .", "eq_num": "(7)" } ], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "There is a constant value 0.1 in Eq. (7). This constant value 0.1 prevent the target probability being dominated by the priors. The partial of the word co-occurrence information should still be captured by the original model and the priors provide the additional information to enhance the word co-occurrence in the model. The following shows how we apply PMI-\u03b2 -priors into the BTM. We apply the \u03b2 PMI of w1 and w2 in Eq. 6 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "( ) ( ) ( ) ( ) k b k b t t t t k b k k b k w w w w w w V V w w w w t t n n n n \uf062 \uf062 \uf06a \uf062 \uf062 \uf0d8 \uf0d8 \uf0d8 \uf0d8 \uf03d \uf03d \uf02b \uf02b \uf03d \uf0b4 \uf02b \uf02b \uf0e5 \uf0e5 .", "eq_num": ", , 1 1" } ], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "Finally, we sample topic assignments by Gibbs sampling (Liu, 1994) approach.", "cite_spans": [ { "start": 55, "end": 66, "text": "(Liu, 1994)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 3. The PMI-\u03b2 priors approach", "sec_num": null }, { "text": "How to justly evaluate the quality of the topic model is still a problem. The reason is that the topic model is an unsupervised method. There are no prominent words or labels can directly assign to each topic. Thus, many researchers apply topic model in other applications, such as clustering, classification and information retrieval (Blei et al., 2003; Yan et al., 2013) . In classification task, instead of using the original word vectors to identify the document categories, it use the reduced vectors which generating from the topic model. The topic model plays as a dimensional reduction role and the classification result shows how well the model to represent the original features. Topic model can also look as the document clustering approach by just considering a document assign to which topic(s). 
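Before turning to the evaluation setup, the prior construction of Eqs. (5)-(7) can be summarized by the following illustrative Python sketch (our own implementation was written in Java). Estimating p(w) and p(w_x, w_y) from document frequencies is a simplifying assumption of the sketch, and Eq. (7) is applied as we read it, i.e. β_PMI = β_SYM + 0.1 × NPMI.

```python
import math
from collections import Counter
from itertools import combinations

def pmi_beta_priors(docs, beta_sym=0.1):
    """Build PMI-based beta priors following Eqs. (5)-(7) from tokenized docs."""
    word_df, biterm_df = Counter(), Counter()
    for tokens in docs:
        terms = sorted(set(tokens))
        word_df.update(terms)                     # document frequency of each word
        biterm_df.update(combinations(terms, 2))  # document frequency of each bi-term

    n_docs = len(docs)
    beta_pmi = {}
    for (wx, wy), df_xy in biterm_df.items():
        p_xy = df_xy / n_docs                     # assumed ML estimate of p(wx, wy)
        p_x, p_y = word_df[wx] / n_docs, word_df[wy] / n_docs
        pmi = math.log(p_xy / (p_x * p_y))        # Eq. (5)
        npmi = pmi / (-math.log(p_xy)) + 1.0      # Eq. (6), shifted into [0, 2]; assumes p_xy < 1
        beta_pmi[(wx, wy)] = beta_sym + 0.1 * npmi  # Eq. (7), as we read it
    return beta_pmi

priors = pmi_beta_priors([["visit", "apple", "store"],
                          ["apple", "store", "open"],
                          ["visit", "museum"]])
print(priors[("apple", "store")])  # a larger prior for a strongly co-occurring pair
```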
In this paper, we evaluate topic models by clustering and classification tasks. Otherwise, to make our experiment more robust, we adopt two different types of short text dataset -Twitter2011 and ETtoday Chinese news title. The properties of these two corpus are different. The text of ETtoday Chinese news title is very regular, while the text of Twitter2011 usually contains emotional words, simplified texts and some unformed words. For example, \"haha\" is the emotional word, and \"agreeeee\" is the unformed word. Table 1 shows the statistics of short text datasets. The number of average words per document is not more than ten words. The number of documents in each class are shown in Figure 4 . The property of both two dataset is skew. The skew dataset may cause the results that the fewer documents are dominated by the larger one. In summary, the challenges of these two datasets are not only the short text problem but also the unbalance category. The top-3 classes in the Twitter2011 dataset are \"#jan25\", \"#superbowl\" and \"#sotu\". And the top-3 classes in the ETtoday News Title dataset are \"entertainment\", \"physical\" and \"political\". ", "cite_spans": [ { "start": 335, "end": 354, "text": "(Blei et al., 2003;", "ref_id": "BIBREF1" }, { "start": 355, "end": 372, "text": "Yan et al., 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 1324, "end": 1331, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 1497, "end": 1505, "text": "Figure 4", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Experiments", "sec_num": "4." }, { "text": "All of the experiments were done on the Intel i7 3.4 GHz CPU and 16G memory PC. All of the pre-process and topic models were written by JAVA code. The parameters \uf061\uf020 priors and the base \u03b2 priors of topic models are all set <0.1>. The number of iterations in Gibbs sampling is set 1,000. To make our results more reliable, we run each experiments 10 times and average these scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "For the clustering experiment, we first get the document-topic posteriori probability distribution \uf066\uf020 and we use the highest probability topic P(z|d) as the cluster assignment for each document in \uf066. For the classification experiment, we divide our dataset into five parts in which four parts for training and one for testing. After training the topic model, we fix the topic-word distribution \uf066\uf020\uf020and then we re-infer document-topic posteriori probability Class ID for ETtoday Dataset distribution \uf071\uf020 of all original short text documents. Instead of using the original word vectors to do the classification task, we take this re-inferred posteriori probability distribution \uf071\uf020 as the reduced feature matrix. Finally we use this reduced feature matrix to classify the documents by LIBLINEAR 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "We compare our methods with the previous topic models: 1) LDA, 2) Mixture of unigrams, and 3) BTM. In addition to the above three topic models, we also compare with our PCA-\u03b2 priors methods. We use the principal component analysis (PCA) to discover the whole corpus principal component. 
Then, we transform the principal component to the topic-word prior distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "In this part, we list three criteria for the clustering experiment and one for classification. In the clustering experiment, let \uf057 = {\uf077 1 , \uf077 2 , ... , \uf077 K } is the output cluster labels, and C = {c 1 , c 2 , ... , c p } is the gold standard labels of the documents. We first describe the three criteria for the clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criteria", "sec_num": "4.2" }, { "text": "Purity is a simple and transparent measure which perform the accuracy of all cluster assignments as the following equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7 Purity", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "max Purity( ,C) k j j k c N \uf076 \uf0c7 \uf057 \uf03d \uf0e5 ,", "eq_num": "(9)" } ], "section": "\uf0b7 Purity", "sec_num": null }, { "text": "where N is the total number of documents. Note that the high purity is easy to achieve when the number of clusters is large. In particular, purity is 1 if each document gets its own cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7 Purity", "sec_num": null }, { "text": "NMI score is based on the information theory. Let I(\uf057, C) denotes the mutual information between the output cluster \uf057\uf020 and the gold standard cluster C. The mutual information of NMI is normalized by each entropy denoted H(\uf057\uf029 and H(C). This normalization can avoid the influence of the number of clusters. The equation of NMI shows following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7 Normalized Mutual Information (NMI)", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "I( ,C) NMI( ,C) [H( ) H( )] 2 C \uf057 \uf057 \uf03d \uf057 \uf02b ,", "eq_num": "(10)" } ], "section": "\uf0b7 Normalized Mutual Information (NMI)", "sec_num": null }, { "text": "where \uf049\uf028\uf057\uf02c\uf020C\uf029,\uf020\uf048\uf028\uf057\uf029\uf020and H\uf028\uf057\uf029 denote:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7 Normalized Mutual Information (NMI)", "sec_num": null }, { "text": "1 http://www.csie.ntu.edu.tw/~cjlin/liblinear/ P( ) I( ,C) P( )log P( ) P( )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7 Normalized Mutual Information (NMI)", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k j k j k j k j c c c \uf076 \uf076 \uf076 \uf0c7 \uf057 \uf03d \uf0c7 \uf0e5\uf0e5 ,", "eq_num": "(11)" } ], "section": "\uf0b7 Normalized Mutual Information (NMI)", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H( ) P( )logP( ) k k k \uf076 \uf076 \uf057 \uf03d \uf02d \uf0e5 .", "eq_num": "(12)" } ], "section": "\uf0b7 Normalized Mutual Information (NMI)", "sec_num": null }, { "text": "Rand Index (RI) (Rand, 1971) consider the clustering result as a pair-wise decision. 
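The Rand Index definition continues below. For the first two criteria, Eqs. (9)-(12) can be computed directly as in the following illustrative Python sketch (our experiments used a Java implementation); the toy cluster assignments and gold labels are only for demonstration.

```python
import math
from collections import Counter

def purity(clusters, labels):
    """Eq. (9): fraction of documents covered by each cluster's majority label."""
    total, correct = len(labels), 0
    for k in set(clusters):
        members = [labels[i] for i in range(total) if clusters[i] == k]
        correct += Counter(members).most_common(1)[0][1]
    return correct / total

def nmi(clusters, labels):
    """Eqs. (10)-(12): mutual information normalized by the mean of the entropies."""
    n = len(labels)
    p_k = {k: c / n for k, c in Counter(clusters).items()}
    p_j = {j: c / n for j, c in Counter(labels).items()}
    p_kj = {kj: c / n for kj, c in Counter(zip(clusters, labels)).items()}
    mi = sum(p * math.log(p / (p_k[k] * p_j[j])) for (k, j), p in p_kj.items())
    h_k = -sum(p * math.log(p) for p in p_k.values())
    h_j = -sum(p * math.log(p) for p in p_j.values())
    return mi / ((h_k + h_j) / 2)

clusters = [0, 0, 1, 1, 1]            # cluster assignments from the topic model
labels   = ["a", "a", "b", "b", "a"]  # gold-standard class labels
print(purity(clusters, labels), nmi(clusters, labels))
```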
More clearly, RI penalizes both true positive and true negative decisions during clustering. If two documents are both in the same class and the same cluster, or both in different classes and different clusters, this decision is correct. For other cases, the decision is false. The equation of RI shows following:", "cite_spans": [ { "start": 16, "end": 28, "text": "(Rand, 1971)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "\uf0b7 Rand Index", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "TP TN RI TP FP FN TN \uf02b \uf03d \uf02b \uf02b \uf02b ,", "eq_num": "(13)" } ], "section": "\uf0b7 Rand Index", "sec_num": null }, { "text": "where TP, FP, FN, and TN are the true positive count, false positive count, false negative count and true negative count respectively. For the classification experiment, we adopt the accuracy as the measure. The definition of the accuracy is the same as the RI score in Eq. 13, but just change the cluster label to the classification label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\uf0b7 Rand Index", "sec_num": null }, { "text": "The Twitter2011 dataset was published in TREC 2011 microblog track 2 . It contains approximately 16 million tweets sampled between January 23rd and February 8th, 2011. It is worth mentioning that there are some semantics tags, called hashtag, in some tweets. The hashtags had been given when the author wrote a tweet. Because these hashtags can identify the semantics of tweets, we use the hashtags as our ground truth for both clustering and classification experiments. However, there are about 10 percentages of all tweets contain hashtags and some hashtags are very rare. Also, there are contains multilingual tweets. To reduce the effect of noise in this dataset, we just extract the English tweets with top-50 frequent hashtags. After tweet extraction, we totally get the 49,461 tweets. Then, we remove the hashtags and stop-words from the context. Finally, we stem all the words in all tweets by the English stemming in the Snowball library. Table 2 shows the clustering results on the Twitter2011 dataset, when we set the number of topic to 50. As expected, BTM is better than Mixture of unigram and LDA got the worst result when we adopt the symmetric priors <0.1>. When apply the PMI-\u03b2 priors, we get the better result than BTM with symmetric priors. Otherwise, our baseline method, PCA-\u03b2, is better than the original LDA because the PCA-\u03b2 prior can make up the lack of the global word co-occurrence information in the original LDA. Figure 5 shows the classification results on the Twitter2011 dataset by using LIBLINEAR classifier. When apply the PMI-\u03b2 priors, we get the better result than BTM with symmetric priors. Table 3 presents the top-10 topic words of the \"job\" topic in the Twitter2011 dataset for LDA, mixture of unigram, BTM and PMI-\u03b2-BTM respectively, when the number of topic is 70. The top-10 words are the 10 highest probability words of the topics. The bold words in this table are the words which highly correlated with the topic by the The number of topic K LDA Mix BTM PMI-beta BTM human judgment. The topic words in the LDA and mixture of unigram models are almost non-correlated or low-correlated with the topic \"job\", such as \"jay\" and \"emote\". 
In BTM and PMI-\u03b2-BTM, the model capture the more high-correlated words, such as \"engineer\" and \"management\". ", "cite_spans": [], "ref_spans": [ { "start": 948, "end": 955, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 1442, "end": 1450, "text": "Figure 5", "ref_id": "FIGREF2" }, { "start": 1628, "end": 1635, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experimental Results for the Twitter2011 Dataset", "sec_num": "4.3" }, { "text": "The ETtoday News Title dataset is collected from the overview list of the ETtoday News website 3 between January 1st and January 31, 2015. There are totally 25 predefined news labels in the dataset. These labels include some classical news category such as \"society news\", \"international news\" and \"political news\", and some special news category such as \"animal and pets\", \"3C\" and \"games\". In both the clustering and the classification experiments, we use these labels as the ground-truth. Because the Chinese text does not contain the break word, we must adopt the additional word breaker in the pre-process step. We adopt the jieba 4 , the Python Chinese word segmentation module, to segment all news title into several words. Figure 6 shows the classification results on the ETtoday News Title dataset. The three original topic model LDA, mixture of unigram, and BTM perform the same order as the results of the Tweet2011 dataset. The PMI-\u03b2 BTM is outperform all other methods. Our PMI-\u03b2-BTM is also suitable to model the regular short text. The top-10 topic words of the \"baseball\" topic of ETtoday news title dataset lists in the Table 4 . Because these words are almost Chinese, we also attach the simple explanation in English. There are many non-related words in the LDA and mixture of unigram, such as \"\u5e74\u7d42\" (Year-end bonuses) and \"\u4e0d\" (no). Especially, we compare the topic words in BTM with in PMI-\u03b2-BTM, the topic words in BTM contain some frequent but low-correlated words with the topic, such as \"\u5e74\" (means year) and \"\u842c\" (means ten thousand). While in the PMI-\u03b2-BTM, this noisy words do not appear. The reason is that the original BTM just consider the simple bi-term frequency and this bi-term frequency make some frequent words be extracted together with other words from the document. Our PMI-\u03b2 priors can decrease the probability of the common words by the word normalization effect in the PMI. ", "cite_spans": [], "ref_spans": [ { "start": 731, "end": 739, "text": "Figure 6", "ref_id": "FIGREF4" }, { "start": 1137, "end": 1144, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results for ETtoday News Title Dataset", "sec_num": "4.4" }, { "text": "In this paper, we propose a solution for topic model to enhance the amount of the word co-occurrence relation in the short text corpus. First, we find the BTM identifies the word co-occurrence by considering the bi-term frequency in the corpus-level. BTM will make the The number of topic K LDA Mix BTM PMI-beta BTM common words be performed excessively because the frequency of bi-term comes from the whole corpus instead of a short document. We propose a PMI-\u03b2 priors method to overcome this problem. The experimental results show our PMI-\u03b2-BTM get the best results in the regular short news title text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "Moreover, there are two advantages in our methods. 
We do not need any external data and the proposed two improvement of the word co-occurrence methods are both based on the original topic model and easy to extend. Bases on the original topic model means we did not modify the model itself, thus our methods can easily apply to some other existing BTM based models to overcome the short text problem without any modification. In the future, we can extend some other steps in PMI-priors to deal the further improvement, such as removing the redundant documents by clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5." }, { "text": "http://trec.nist.gov/data/tweets/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.ettoday.net/news/news-list.htm 4 https://github.com/fxsjy/jieba", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Probabilistic latent semantic analysis", "authors": [ { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence", "volume": "", "issue": "", "pages": "289--296", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hofmann, T. (1999). Probabilistic latent semantic analysis. In Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence, 289-296.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Latent dirichlet allocation. the", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of machine Learning research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent dirichlet allocation. the Journal of machine Learning research, 3, 993-1022.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A Survey on Topic Modeling", "authors": [ { "first": "M", "middle": [], "last": "Divya", "suffix": "" }, { "first": "K", "middle": [], "last": "Thendral", "suffix": "" }, { "first": "S", "middle": [], "last": "Chitrakala", "suffix": "" } ], "year": 2013, "venue": "International Journal of Recent Advances in Engineering & Technology (IJRAET)", "volume": "1", "issue": "", "pages": "57--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Divya, M., Thendral, K., & Chitrakala, S. (2013). A Survey on Topic Modeling. International Journal of Recent Advances in Engineering & Technology (IJRAET), 1, 57-61.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Optimizing semantic coherence in topic models", "authors": [ { "first": "D", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "H", "middle": [ "M" ], "last": "Wallach", "suffix": "" }, { "first": "E", "middle": [], "last": "Talley", "suffix": "" }, { "first": "M", "middle": [], "last": "Leenders", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "262--272", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mimno, D., Wallach, H. M., Talley, E., Leenders, M., & McCallum, A. (2011). Optimizing semantic coherence in topic models. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 262-272.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A biterm topic model for short texts", "authors": [ { "first": "X", "middle": [], "last": "Yan", "suffix": "" }, { "first": "J", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Lan", "suffix": "" }, { "first": "X", "middle": [], "last": "Cheng", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 22nd international conference on World Wide Web", "volume": "", "issue": "", "pages": "1445--1456", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yan, X., Guo, J., Lan, Y., & Cheng, X. (2013). A biterm topic model for short texts. In Proceedings of the 22nd international conference on World Wide Web, Rio de Janeiro, Brazil, 1445-1456.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Text classification from labeled and unlabeled documents using EM", "authors": [ { "first": "K", "middle": [], "last": "Nigam", "suffix": "" }, { "first": "A", "middle": [ "K" ], "last": "Mccallum", "suffix": "" }, { "first": "S", "middle": [], "last": "Thrun", "suffix": "" }, { "first": "T", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2000, "venue": "Machine learning", "volume": "39", "issue": "2", "pages": "103--134", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nigam, K., McCallum, A. K., Thrun, S., & Mitchell, T. (2000). Text classification from labeled and unlabeled documents using EM. Machine learning, 39(2), 103-134.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Comparing twitter and traditional media using topic models", "authors": [ { "first": "W", "middle": [ "X" ], "last": "Zhao", "suffix": "" }, { "first": "J", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "J", "middle": [], "last": "Weng", "suffix": "" }, { "first": "J", "middle": [], "last": "He", "suffix": "" }, { "first": "E.-P", "middle": [], "last": "Lim", "suffix": "" }, { "first": "H", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2011, "venue": "Advances in Information Retrieval", "volume": "", "issue": "", "pages": "338--349", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhao, W. X., Jiang, J., Weng, J., He, J., Lim, E.-P., Yan, H., et al. (2011). Comparing twitter and traditional media using topic models. In Advances in Information Retrieval. ed: Springer, 338-349.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "BTM: Topic Modeling over Short Texts. Knowledge and Data Engineering", "authors": [ { "first": "X", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "X", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Y", "middle": [], "last": "Lan", "suffix": "" }, { "first": "J", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2014, "venue": "IEEE Transactions on", "volume": "26", "issue": "12", "pages": "2928--2941", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cheng, X., Yan, X., Lan, Y., & Guo, J. (2014). BTM: Topic Modeling over Short Texts. Knowledge and Data Engineering, IEEE Transactions on, 26(12), 2928-2941.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Word association norms, mutual information, and lexicography. 
Computational linguistics", "authors": [ { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": "P", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 1990, "venue": "", "volume": "16", "issue": "", "pages": "22--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Church, K. W., & Hanks, P. (1990). Word association norms, mutual information, and lexicography. Computational linguistics, 16(1), 22-29.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Rethinking LDA: Why priors matter", "authors": [ { "first": "H", "middle": [ "M" ], "last": "Wallach", "suffix": "" }, { "first": "D", "middle": [], "last": "Minmo", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2009, "venue": "Advances in Neural Information Processing Systems", "volume": "22", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wallach, H. M., Minmo, D., & McCallum, A. (2009). Rethinking LDA: Why priors matter. In Advances in Neural Information Processing Systems 22 (NIPS 2009).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "An introduction to latent semantic analysis", "authors": [ { "first": "T", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "P", "middle": [ "W" ], "last": "Foltz", "suffix": "" }, { "first": "D", "middle": [], "last": "Laham", "suffix": "" } ], "year": 1998, "venue": "Discourse processes", "volume": "25", "issue": "", "pages": "259--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Landauer, T. K., Foltz, P. W., & Laham, D. (1998). An introduction to latent semantic analysis. Discourse processes, 25(2&3), 259-284.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Topic modeling: beyond bag-of-words", "authors": [ { "first": "H", "middle": [ "M" ], "last": "Wallach", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 23rd international conference on Machine learning", "volume": "", "issue": "", "pages": "977--984", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wallach, H. M. (2006). Topic modeling: beyond bag-of-words. In Proceedings of the 23rd international conference on Machine learning, 977-984.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Topical n-grams: Phrase and topic discovery, with an application to information retrieval", "authors": [ { "first": "X", "middle": [], "last": "Wang", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "X", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2007, "venue": "Seventh IEEE International Conference on Data Mining", "volume": "", "issue": "", "pages": "697--702", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, X., McCallum, A., & Wei, X. (2007). Topical n-grams: Phrase and topic discovery, with an application to information retrieval. In Seventh IEEE International Conference on Data Mining (ICDM 2007), 697-702.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Integrating topics and syntax", "authors": [ { "first": "T", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Steyvers", "suffix": "" }, { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "J", "middle": [ "B" ], "last": "Tenenbaum", "suffix": "" } ], "year": 2004, "venue": "Advances in neural information processing systems", "volume": "17", "issue": "", "pages": "537--544", "other_ids": {}, "num": null, "urls": [], "raw_text": "Griffiths, T. 
L., Steyvers, M., Blei, D. M., & Tenenbaum, J. B. (2004). Integrating topics and syntax. In Advances in neural information processing systems 17, 537-544.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Pachinko allocation: DAG-structured mixture models of topic correlations", "authors": [ { "first": "W", "middle": [], "last": "Li", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 23rd international conference on Machine learning", "volume": "", "issue": "", "pages": "577--584", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, W. & McCallum, A. (2006). Pachinko allocation: DAG-structured mixture models of topic correlations. In Proceedings of the 23rd international conference on Machine learning, 577-584.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Mining topics in documents: standing on the shoulders of big data", "authors": [ { "first": "Z", "middle": [], "last": "Chen", "suffix": "" }, { "first": "B", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "1116--1125", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, Z. & Liu, B. (2014). Mining topics in documents: standing on the shoulders of big data. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, New York, New York, USA, 1116-1125.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "PET: a statistical model for popular events tracking in social communities", "authors": [ { "first": "C", "middle": [ "X" ], "last": "Lin", "suffix": "" }, { "first": "B", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Q", "middle": [], "last": "Mei", "suffix": "" }, { "first": "J", "middle": [], "last": "Han", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "929--938", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, C. X., Zhao, B., Mei, Q., & Han, J. (2010). PET: a statistical model for popular events tracking in social communities. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, 929-938.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Characterizing Microblogs with Topic Models", "authors": [ { "first": "D", "middle": [], "last": "Ramage", "suffix": "" }, { "first": "S", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "D", "middle": [ "J" ], "last": "Liebling", "suffix": "" } ], "year": 2010, "venue": "Fourth International AAAI Conference on Weblogs and Social Media", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramage, D., Dumais, S. T., & Liebling, D. J. (2010). Characterizing Microblogs with Topic Models. 
In Fourth International AAAI Conference on Weblogs and Social Media.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Short and tweet: experiments on recommending content from information streams", "authors": [ { "first": "J", "middle": [], "last": "Chen", "suffix": "" }, { "first": "R", "middle": [], "last": "Nairn", "suffix": "" }, { "first": "L", "middle": [], "last": "Nelson", "suffix": "" }, { "first": "M", "middle": [], "last": "Bernstein", "suffix": "" }, { "first": "E", "middle": [], "last": "Chi", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "1185--1194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, J., Nairn, R., Nelson, L., Bernstein, M., & Chi, E. (2010). Short and tweet: experiments on recommending content from information streams. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1185-1194.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Using twitter to recommend real-time topical news", "authors": [ { "first": "O", "middle": [], "last": "Phelan", "suffix": "" }, { "first": "K", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "B", "middle": [], "last": "Smyth", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the third ACM conference on Recommender systems", "volume": "", "issue": "", "pages": "385--388", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phelan, O., McCarthy, K., & Smyth, B. (2009). Using twitter to recommend real-time topical news. In Proceedings of the third ACM conference on Recommender systems, 385-388.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning to classify short and sparse text & web with hidden topics from large-scale data collections", "authors": [ { "first": "X.-H", "middle": [], "last": "Phan", "suffix": "" }, { "first": "L.-M", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "S", "middle": [], "last": "Horiguchi", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 17th international conference on World Wide Web", "volume": "", "issue": "", "pages": "91--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phan, X.-H., Nguyen, L.-M., & Horiguchi, S. (2008). Learning to classify short and sparse text & web with hidden topics from large-scale data collections. In Proceedings of the 17th international conference on World Wide Web, 91-100.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The author-topic model for authors and documents", "authors": [ { "first": "M", "middle": [], "last": "Rosen-Zvi", "suffix": "" }, { "first": "T", "middle": [], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Steyvers", "suffix": "" }, { "first": "P", "middle": [], "last": "Smyth", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th conference on Uncertainty in artificial intelligence", "volume": "", "issue": "", "pages": "487--494", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosen-Zvi, M., Griffiths, T., Steyvers, M., & Smyth, P. (2004). The author-topic model for authors and documents. 
In Proceedings of the 20th conference on Uncertainty in artificial intelligence, 487-494.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Transferring topical knowledge from auxiliary long texts for short text clustering", "authors": [ { "first": "O", "middle": [], "last": "Jin", "suffix": "" }, { "first": "N", "middle": [ "N" ], "last": "Liu", "suffix": "" }, { "first": "K", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Y", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Q", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 20th ACM international conference on Information and knowledge management", "volume": "", "issue": "", "pages": "775--784", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin, O., Liu, N. N., Zhao, K., Yu, Y., & Yang, Q. (2011). Transferring topical knowledge from auxiliary long texts for short text clustering. In Proceedings of the 20th ACM international conference on Information and knowledge management, 775-784.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Empirical study of topic modeling in twitter", "authors": [ { "first": "L", "middle": [], "last": "Hong", "suffix": "" }, { "first": "B", "middle": [ "D" ], "last": "Davison", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the First Workshop on Social Media Analytics", "volume": "", "issue": "", "pages": "80--88", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hong, L. & Davison, B. D. (2010). Empirical study of topic modeling in twitter. In Proceedings of the First Workshop on Social Media Analytics, 80-88.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The collapsed Gibbs sampler in Bayesian computations with applications to a gene regulation problem", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Liu", "suffix": "" } ], "year": 1994, "venue": "Journal of the American Statistical Association", "volume": "89", "issue": "427", "pages": "958--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liu, J. S. (1994). The collapsed Gibbs sampler in Bayesian computations with applications to a gene regulation problem. Journal of the American Statistical Association, 89(427), 958-966.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Objective criteria for the evaluation of clustering methods", "authors": [ { "first": "W", "middle": [ "M" ], "last": "Rand", "suffix": "" } ], "year": 1971, "venue": "Journal of the American Statistical association", "volume": "66", "issue": "336", "pages": "846--850", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rand, W. M. (1971). Objective criteria for the evaluation of clustering methods. Journal of the American Statistical association, 66(336), 846-850.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora", "authors": [ { "first": "D", "middle": [], "last": "Ramage", "suffix": "" }, { "first": "D", "middle": [], "last": "Hall", "suffix": "" }, { "first": "R", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "1", "issue": "", "pages": "248--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramage, D., Hall, D., Nallapati, R., & Manning, C. D. (2009). 
Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, 1, 248-256.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "The number of documents in each class" }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "The Classification Results on Twitter2011 dataset" }, "FIGREF4": { "num": null, "uris": null, "type_str": "figure", "text": "The Classification Results on ETtoday dataset Table 4. The top-10 topic words of the \"baseball\" topic in ETtoday News Title dataset Top-10 Topic words LDA \u4e2d\u8077 (baseball game in Taiwan), \u6708 (month), \u842c (ten thousand), \u5e74 (year), \u5927 (big), \u5143 (dollars), \u5433\u8a8c\u63da (a politician), \u81fa\u5317 (Taipei), \u81fa\u7063 (Taiwan), \u5e74\u7d42 (Year-end bonuses) Mix \u4e2d\u8077, \u65e5 (day), \u81fa\u7063, \u5927, \u82f1\u96c4 (hero), \u806f\u76df (league baseball), \u4e16\u754c (world), \u68d2\u7403 (baseball), \u4e0d (no), \u6311\u6230 (challenge) BTM \u4e2d \u8077 , \u7fa9 \u5927 (a baseball team), \u5144 \u5f1f (a baseball team), MLB, \u7d71 \u4e00 (a baseball team), \u5e74, \u6843\u733f (a baseball team), \u842c, \u7345 (a baseball team), \u4eba (human) PMI-\uf062-BTM \u4e2d \u8077 , MLB, \u5144 \u5f1f , \u65e5 \u8077 (baseball game in Japan), \u68d2 \u7403 , \u6843 \u733f , \u5148 \u767c (Starting Pitcher), \u7e3d\u51a0\u8ecd (champion), \u9673\u5049\u6bb7 (a Taiwanese professional baseball pitcher), \u7d71\u4e00 (a baseball team)" }, "TABREF0": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
General Documents | Topics
Example document: Twitch Plays Pok\u00e9mon is a social experiment and channel on the video streaming website Twitch, consisting of a crowdsourced attempt to play Game Freak's and Nintendo's Pok\u00e9mon video games by parsing commands sent by users through the channel's chat room. The concept was developed by ...
Topic distributions: stream 0.15, channel 0.45, video 0.40, social experiment 0.15, crowdsource 0.53, ...; twitch 0.12, game 0.35, video 0.53
Short text example (tweet): David @GuysWithPride This is an apple. HAAA
Topic distributions: apple 0.45, banana 0.25, fruit 0.15, food 0.36, chicken 0.13, ...; haaa 0.12, hi 0.35, noooo 0.53
" }, "TABREF2": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
Property | Twitter2011 | ETtoday News title
The number of documents | 49,461 | 17,814
The number of domains | 50 | 25
The number of distinct words | 30,421 | 31,217
Avg. words per document | 5.92 | 9.25
" }, "TABREF3": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
Model | β priors | Purity | NMI | RI
LDA | <0.100> | 0.4174 | 0.3217 | 0.9127
LDA | PCA-β | 0.4348 | 0.3325 | 0.9266
Mix | <0.100> | 0.4217 | 0.3358 | 0.8687
Mix | PCA-β | 0.3748 | 0.3305 | 0.7550
BTM | <0.100> | 0.4318 | 0.3429 | 0.9092
BTM | PCA-β | 0.4367 | 0.4000 | 0.8665
BTM | PMI-β | 0.4427 | 0.3927 | 0.9284
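Note on the metrics above: Purity, NMI and RI (Rand Index; Rand, 1971) compare each document's gold class label with the topic it is assigned to. The following is a minimal sketch of how these three scores can be computed, assuming each document is assigned to its most probable topic; it is not the authors' evaluation code, and the toy gold/pred labels at the bottom are hypothetical.

# Minimal sketch (not the authors' evaluation code) of the three clustering
# metrics reported above: Purity, NMI and the Rand Index (Rand, 1971),
# computed from gold class labels and the topic assigned to each document.
from collections import Counter
from itertools import combinations
from math import log, sqrt

def purity(gold, pred):
    # For each predicted cluster, count its most frequent gold class,
    # then divide the sum of those counts by the number of documents.
    clusters = {}
    for g, p in zip(gold, pred):
        clusters.setdefault(p, []).append(g)
    return sum(max(Counter(m).values()) for m in clusters.values()) / len(gold)

def rand_index(gold, pred):
    # Fraction of document pairs on which the two labelings agree:
    # same cluster in both, or different clusters in both.
    pairs = list(combinations(zip(gold, pred), 2))
    agree = sum((g1 == g2) == (p1 == p2) for (g1, p1), (g2, p2) in pairs)
    return agree / len(pairs)

def nmi(gold, pred):
    # Normalized mutual information with geometric-mean normalization:
    # I(gold; pred) / sqrt(H(gold) * H(pred)).
    n = len(gold)
    pg, pp, pj = Counter(gold), Counter(pred), Counter(zip(gold, pred))
    mi = sum(c / n * log((c / n) / ((pg[g] / n) * (pp[p] / n)))
             for (g, p), c in pj.items())
    hg = -sum(c / n * log(c / n) for c in pg.values())
    hp = -sum(c / n * log(c / n) for c in pp.values())
    return mi / sqrt(hg * hp) if hg > 0 and hp > 0 else 0.0

if __name__ == "__main__":
    gold = ["sport", "sport", "politics", "politics", "tech"]  # hypothetical gold classes
    pred = [0, 0, 1, 1, 1]                                     # most-probable topic per document
    print(purity(gold, pred), nmi(gold, pred), rand_index(gold, pred))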
" }, "TABREF4": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
The top-10 topic words of the \"job\" topic in Twitter2011 dataset
Model | Top-10 Topic words
LDA | job, house, jay, steal, material, burglary, construct, park, pick, ur
Mix | job, robbery, material, construct, steal, warehouse, emote, feel, woman, does
" } } } }