{ "paper_id": "O12-2004", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:03:15.598837Z" }, "title": "A Comparative Study of Methods for Topic Modeling in Spoken Document Retrieval", "authors": [ { "first": "Shih-Hsiang", "middle": [], "last": "Lin", "suffix": "", "affiliation": {}, "email": "shlin@csie.ntnu.edu.tw" }, { "first": "Berlin", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan Normal University", "location": {} }, "email": "berlin@csie.ntnu.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Topic modeling for information retrieval (IR) has attracted significant attention and demonstrated good performance in a wide variety of tasks over the years. In this paper, we first present a comprehensive comparison of various topic modeling approaches, including the so-called document topic models (DTM) and word topic models (WTM), for Chinese spoken document retrieval (SDR). Moreover, different granularities of index features, including words, subword units, and their combinations, are also exploited to work in conjunction with various extensions of topic modeling presented in this paper, so as to alleviate SDR performance degradation caused by speech recognition errors. All of the experiments were performed on the TDT Chinese collection.", "pdf_parse": { "paper_id": "O12-2004", "_pdf_hash": "", "abstract": [ { "text": "Topic modeling for information retrieval (IR) has attracted significant attention and demonstrated good performance in a wide variety of tasks over the years. In this paper, we first present a comprehensive comparison of various topic modeling approaches, including the so-called document topic models (DTM) and word topic models (WTM), for Chinese spoken document retrieval (SDR). 
Moreover, different granularities of index features, including words, subword units, and their combinations, are also exploited to work in conjunction with various extensions of topic modeling presented in this paper, so as to alleviate SDR performance degradation caused by speech recognition errors. All of the experiments were performed on the TDT Chinese collection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Due to the advances in computer technology and the proliferation of Internet activity, huge volumes of multimedia data, such as text files, broadcast radio and television programs, lectures, and digital archives, are continuously growing and filling networks. The development of intelligent and efficient information retrieval techniques that provide people with easy access to all kinds of information has therefore received growing emphasis. Meanwhile, with the rapid evolution of speech recognition technology, substantial efforts and very encouraging results on spoken document retrieval (SDR) also have been demonstrated in the recent past. Although most retrieval systems participating in the TREC-SDR evaluations claimed that speech recognition errors do not seem to cause much adverse effect on SDR performance when merely using imperfect recognition transcripts derived from one-best recognition results from a speech recognizer (Garofolo et al., 2000; Chelba et al., 2008), this is probably attributable to the fact that the TREC-style test queries tend to be quite long and contain different words describing similar concepts that can help the queries match their relevant spoken documents. Furthermore, a query word (or phrase) may occur repeatedly (more than once) within a relevant spoken document, and it is not always the case that all of the occurrences of the word would be totally misrecognized as other words. 
We, however, believe that SDR would still present a challenge in situations where the queries are relatively short and there exists severe deviation in word usage between the queries and spoken documents.", "cite_spans": [ { "start": 933, "end": 956, "text": "(Garofolo et al., 2000;", "ref_id": "BIBREF7" }, { "start": 957, "end": 977, "text": "Chelba et al., 2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Among several promising information retrieval approaches, statistical language modeling (LM) (Ponte & Croft, 1998), aiming to capture the regularity in human natural language and quantify the acceptability of a given word sequence, has continuously been a focus of active research in the last decade (Miller et al., 1999; Hofmann, 2001). The basic idea is that each individual document in the collection is treated as a probabilistic language model for generating a given query. A document is deemed to be relevant to a query if its corresponding document language model generates the query with higher likelihood. In practice, the relevance measure for the LM approach is usually computed by two different matching strategies, namely, literal term matching and concept matching (Lee & Chen, 2005). The unigram language model (ULM) is perhaps the most representative example of the literal term matching strategy (Miller et al., 1999). 
In the ULM approach, each document is interpreted as a generative model composed of a mixture of unigram (multinomial) distributions for observing a query, while the query is regarded as observations, expressed as a sequence of indexing words (or terms).", "cite_spans": [ { "start": 93, "end": 114, "text": "(Ponte & Croft, 1998)", "ref_id": "BIBREF17" }, { "start": 301, "end": 322, "text": "(Miller et al., 1999;", "ref_id": "BIBREF16" }, { "start": 323, "end": 337, "text": "Hofmann, 2001)", "ref_id": "BIBREF12" }, { "start": 781, "end": 799, "text": "(Lee & Chen, 2005)", "ref_id": "BIBREF14" }, { "start": 913, "end": 934, "text": "(Miller et al., 1999)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Nevertheless, these approaches suffer from the problem of word usage diversity, which can severely degrade retrieval performance when a given query and its relevant documents use quite different sets of words. In contrast, the concept matching strategy tries to explore the topic information conveyed in the query and documents, and performs retrieval on that basis. The probabilistic latent semantic analysis (PLSA) (Hofmann, 2001) and the latent Dirichlet allocation (LDA) (Blei et al., 2003) are often considered to be two basic representatives of this category. They both introduce a set of latent topic variables to describe the \"word-document\" co-occurrence characteristics. More specifically, the relevance between a query and a document is not computed directly based on the frequency of the query words occurring in the document, but instead based on the frequency of these words appearing in the latent topics as well as the likelihood that the document generates those respective topics, which exhibits some sort of concept matching. 
Further, although there have been many follow-up studies and extensions of PLSA and LDA, it has been shown that more sophisticated (or complicated) topic models, such as the pachinko", "cite_spans": [ { "start": 467, "end": 481, "text": "(Hofmann, 2001", "ref_id": "BIBREF12" }, { "start": 526, "end": 545, "text": "(Blei et al., 2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "allocation model (PAM) and correlated topic model (CTM), do not necessarily offer further retrieval benefits (Zhai, 2008; Blei & Lafferty, 2009). On the other hand, rather than treating each document as a whole as a document topic model (DTM), such as PLSA and LDA, the word topic model (WTM) (Chen, 2009) attempts to discover the long-span co-occurrence dependence \"between words\" through a set of latent topics, while each document in the collection consequently can be represented as a composite WTM model in an efficient way for predicting an observed query. Interested readers can refer to Griffiths et al. (2007), Zhai (2008), and Blei and Lafferty (2009) for a thorough and updated overview of the major topic-based language models that have been successfully developed and applied to various IR tasks.", "cite_spans": [ { "start": 135, "end": 147, "text": "(Zhai, 2008;", "ref_id": "BIBREF22" }, { "start": 148, "end": 170, "text": "Blei & Lafferty, 2009)", "ref_id": "BIBREF1" }, { "start": 320, "end": 332, "text": "(Chen, 2009)", "ref_id": "BIBREF4" }, { "start": 622, "end": 645, "text": "Griffiths et al. 
(2007)", "ref_id": "BIBREF9" }, { "start": 648, "end": 659, "text": "Zhai (2008)", "ref_id": "BIBREF22" }, { "start": 666, "end": 690, "text": "Blei and Lafferty (2009)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "A Comparative Study of Methods for Topic Modeling in Spoken Document Retrieval", "sec_num": null }, { "text": "Although most of the above approaches can be equally applied to both text and spoken documents, the latter presents unique difficulties, such as speech recognition errors, problems posed by spontaneous speech, and redundant information. A straightforward remedy, apart from the conventional approaches that target improving recognition accuracy, is to develop more robust representations of spoken documents for spoken document retrieval (SDR). For example, multiple recognition hypotheses, beyond the top scoring ones, are expected to provide alternative representations for the confusing portions of the spoken documents (Chelba et al., 2008; Chia et al., 2008). Another school of thought attempts to leverage subword units, as well as the combination of words and subword units, for representing the spoken documents, which also has been shown to be beneficial for SDR. The reason for the fusion of word- and subword-level information is that incorrectly recognized spoken words often include several subword units that are correctly recognized. 
Hence, the retrieval process based on subword-level representations may take advantage of partial matching (Lin & Chen, 2009).", "cite_spans": [ { "start": 621, "end": 642, "text": "(Chelba et al., 2008;", "ref_id": "BIBREF3" }, { "start": 643, "end": 661, "text": "Chia et al., 2008)", "ref_id": "BIBREF5" }, { "start": 1148, "end": 1165, "text": "(Lin & Chen, 2009", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "A Comparative Study of Methods for Topic Modeling in Spoken Document Retrieval", "sec_num": null }, { "text": "With the above inspiration in mind, we first compare the structural characteristics of various topic models for Chinese SDR, including PLSA and LDA, as well as WTM. The utility of these models is thoroughly examined using both long and short test queries. Moreover, different granularities of index features, including words, subword units, and their combinations, are also exploited to work in conjunction with various extensions of topic modeling presented in this paper, so as to alleviate SDR performance degradation caused by imperfect recognition transcripts. To our knowledge, there is little literature on leveraging various topic decompositions together with various granularities of index features for topic modeling in SDR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Comparative Study of Methods for Topic Modeling in Spoken Document Retrieval", "sec_num": null }, { "text": "The rest of this paper is structured as follows. Section 2 elucidates the structural characteristics of the different types of topic models for the retrieval purpose. Section 3 discusses two different extensions of topic modeling. Section 4 describes the spoken document collection used in this paper, as well as the experimental setup. A series of experiments and associated discussions are presented in Section 5. 
Finally, Section 6 concludes this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Comparative Study of Methods for Topic Modeling in Spoken Document Retrieval", "sec_num": null }, { "text": "In this section, we first describe the probabilistic generative framework for information retrieval. We then briefly review the document topic models (DTM), including the probabilistic latent semantic analysis (PLSA) (Hofmann, 2001) and the latent Dirichlet allocation (LDA) (Blei et al., 2003; Wei & Croft, 2006), followed by an introduction to the word topic model (WTM) (Chen, 2009), as well as the word Dirichlet topic model (WDTM).", "cite_spans": [ { "start": 217, "end": 232, "text": "(Hofmann, 2001)", "ref_id": "BIBREF12" }, { "start": 270, "end": 289, "text": "(Blei et al., 2003;", "ref_id": "BIBREF0" }, { "start": 290, "end": 308, "text": "Wei & Croft, 2006)", "ref_id": "BIBREF18" }, { "start": 369, "end": 381, "text": "(Chen, 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Topic Models", "sec_num": "2." }, { "text": "When the language modeling approach is applied to IR, it basically makes use of a probabilistic generative framework for ranking each document D in the collection given a query Q, which can be expressed by P(D|Q). By applying Bayes' theorem, this ranking criterion can be approximated by the likelihood of Q generated by D, i.e., P(Q|D), when we assume that the prior probability of each document P(D) is uniformly distributed. For this idea to work, each document D is treated as a probabilistic language model M_D for generating the query. 
Furthermore, if the query Q is treated as a sequence of words (or terms), Q = w_1 w_2 ... w_N, where the query words are assumed to be conditionally independent given the document model M_D and their order is also assumed to be of no importance (i.e., the so-called \"bag-of-words\" assumption), the relevance measure P(Q|M_D) can be further decomposed as a product of the probabilities of the query words generated by the document:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Generative Framework", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(Q|\\mathrm{M}_D) = \\prod_{w_i \\in Q} P(w_i|\\mathrm{M}_D)^{c(w_i,Q)}", "eq_num": "(1)" } ], "section": "Probabilistic Generative Framework", "sec_num": "2.1" }, { "text": "where c(w_i, Q) is the number of times that each distinct word w_i occurs in Q. The document ranking problem has now been reduced to the problem of constructing the document model P(w_i|M_D). The simplest way to construct P(w_i|M_D) is based on literal term matching, or using the unigram language model (ULM), where each document of the collection can respectively offer a unigram distribution for observing a query word, i.e., P_ULM(w_i|M_D), which is estimated on the basis of the words occurring in the document:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Generative Framework", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_{\\mathrm{ULM}}(w_i|\\mathrm{M}_D) = \\frac{c(w_i,D)}{|D|}", "eq_num": "(2)" } ], "section": "Probabilistic Generative Framework", "sec_num": "2.1" }, { "text": "where c(w_i, D) is the number of times w_i occurs in D and |D| is the total number of words in the document. In order to avoid the problem of zero probability, the ULM is usually smoothed by a unigram distribution estimated from a general collection, i.e.,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Generative Framework", "sec_num": "2.1" }, { "text": "P(w_i|D) = \\lambda \\cdot P_{\\mathrm{ULM}}(w_i|\\mathrm{M}_D) + (1-\\lambda) \\cdot P_{\\mathrm{ULM}}(w_i|\\mathrm{M}_C) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Generative Framework", "sec_num": "2.1" }, { "text": "where \u03bb is a weighting parameter. It turns out that a document with more query words occurring in it would tend to receive a higher probability; further, the use of P_ULM(w_i|M_C) to some extent can help deemphasize common (non-informative) words and instead put more emphasis on discriminative (or informative) words for the purpose of document ranking (Zhai, 2008). 
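As a concrete illustration of the query-likelihood ranking in Eqs. (1)-(3), the sketch below scores a document for a query under Jelinek-Mercer smoothing. The function name `ulm_score` and the setting lam=0.7 are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter
import math

def ulm_score(query_words, doc_words, collection_counts, total_count, lam=0.7):
    """Log query likelihood log P(Q|M_D) with Jelinek-Mercer smoothing (Eqs. 1-3).

    collection_counts/total_count supply the background model P_ULM(w|M_C).
    """
    doc_counts = Counter(doc_words)
    doc_len = len(doc_words)
    score = 0.0
    for w, c_wq in Counter(query_words).items():
        p_doc = doc_counts[w] / doc_len if doc_len else 0.0       # Eq. (2)
        p_bg = collection_counts.get(w, 0) / total_count          # background model
        # Eq. (3) interpolation, raised to c(w, Q) in the log domain (Eq. 1)
        score += c_wq * math.log(lam * p_doc + (1 - lam) * p_bg + 1e-12)
    return score
```

Documents would then be ranked by this score in descending order.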
In the following, P_ULM(w_i|M_D) and P_ULM(w_i|M_C) will be termed the document model and the background model, respectively.", "cite_spans": [ { "start": 174, "end": 186, "text": "(Zhai, 2008)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Generative Framework", "sec_num": "2.1" }, { "text": "As mentioned earlier, there probably would be a word usage mismatch between a query and a spoken document, even if they are topically related to each other. Therefore, instead of constructing the document model based on the literal term information, we can exploit probabilistic topic models to represent each spoken document through a latent topic space (Blei et al., 2010). In this spectrum of research, each document D is regarded as a document topic model (DTM), consisting of a set of K shared latent topics {T_1, ..., T_k, ..., T_K} with document-specific weights P(T_k|M_D), where each topic T_k in turn offers a unigram distribution P(w_i|T_k)", "cite_spans": [ { "start": 353, "end": 372, "text": "(Blei et al., 2010)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "for observing an arbitrary word of the language. 
For example, in the PLSA model, the probability of a word w_i generated by a document D is expressed by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_{\\mathrm{PLSA}}(w_i|\\mathrm{M}_D) = \\sum_{k=1}^{K} P(w_i|T_k) P(T_k|\\mathrm{M}_D)", "eq_num": "(4)" } ], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "The key idea we wish to illustrate here is that, for PLSA, the relevance measure of a query word w_i and a document D is not computed directly based on the frequency of w_i occurring in D, but instead based on the frequency of w_i in the latent topic T_k as well as the likelihood that D generates the respective topic T_k, which in fact exhibits some sort of concept matching. A document is believed to be more relevant to the query if it has higher weights on some topics and the query words also happen to appear frequently in these topics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "In the practical implementation of PLSA, the corresponding DTM models are usually trained in an unsupervised way by maximizing the total log-likelihood of the document collection D in terms of the unigram P_PLSA(w_i|M_D) of all words w_i observed in the document collection, or, more specifically, the total likelihood of all documents generated by their own DTM models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "L_{\\mathrm{PLSA}} = \\prod_{D \\in \\mathbf{D}} P_{\\mathrm{PLSA}}(D|\\mathrm{M}_D) = \\prod_{D \\in \\mathbf{D}} \\prod_{w_i \\in D} P_{\\mathrm{PLSA}}(w_i|\\mathrm{M}_D)^{c(w_i,D)} (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "We can first use the K-means algorithm to partition the entire document collection into K topical classes. Hence, the initial topical unigram distribution P(w_i|T_k) for a topical cluster can be estimated according to the underlying statistical characteristics of the documents being assigned to it, and the probabilities for each document generating the topics, i.e., P(T_k|M_D), are measured according to its proximity to the centroid of each respective cluster. Then, (5) can be iteratively optimized by the following expectation-maximization (EM) (Dempster et al., 1977) updating equations:", "cite_spans": [ { "start": 562, "end": 585, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "-E (Expectation) Step: P(T_k|w_i, \\mathrm{M}_D) = \\frac{P(w_i|T_k) P(T_k|\\mathrm{M}_D)}{\\sum_{k'} P(w_i|T_{k'}) P(T_{k'}|\\mathrm{M}_D)} (6) -M (Maximization) Step: P(w_i|T_k) = \\frac{\\sum_{D} c(w_i,D) P(T_k|w_i, \\mathrm{M}_D)}{\\sum_{w} \\sum_{D} c(w,D) P(T_k|w, \\mathrm{M}_D)} (7) P(T_k|\\mathrm{M}_D) = \\frac{\\sum_{w} c(w,D) P(T_k|w, \\mathrm{M}_D)}{\\sum_{w'} c(w',D)} (8) where P(T_k|w_i, \\mathrm{M}_D)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "is the probability that the latent topic T_k occurs given the word w_i and the document model M_D, which is computed using the probability quantities P(w_i|T_k) and P(T_k|M_D)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": 
"obtained in the previous training iteration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "On the other hand, LDA, having a formula analogous to PLSA for document ranking, is regarded as a generalization of PLSA and has enjoyed considerable success in a wide variety of natural language processing (NLP) tasks. LDA differs from PLSA mainly in the inference of model parameters: PLSA assumes the model parameters are fixed and unknown, while LDA places additional a priori constraints on the model parameters, i.e., thinking of them as random variables that follow Dirichlet distributions. In other words, the total likelihood of all documents generated by LDA models is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "L_{\\mathrm{LDA}} = \\prod_{D \\in \\mathbf{D}} \\int\\!\\!\\int \\prod_{z=1}^{K} P(\\varphi_z|\\beta) P(\\theta_D|\\alpha) \\prod_{w_i \\in D} \\left( \\sum_{k=1}^{K} P(w_i|T_k, \\varphi_k) P(T_k|\\theta_D) \\right) d\\theta_D \\, d\\varphi (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "where \u03b8_D and \u03c6_z are multinomial distributions with Dirichlet parameters \u03b1 and \u03b2, respectively, and |D| is the number of words in the document D. LDA possesses fully consistent generative semantics by treating the topic mixture distribution as a K-parameter hidden random variable rather than a large set of individual parameters that are explicitly linked to the training set (Blei et al., 2003). 
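The EM updates of Eqs. (6)-(8) admit a compact dense-matrix sketch. The function `plsa_em` and its random initialization are illustrative assumptions (the paper initializes with K-means, which this sketch omits for brevity).

```python
import numpy as np

def plsa_em(counts, K, iters=50, seed=0):
    """Fit PLSA by EM. counts: (n_docs, V) term-frequency matrix.

    Returns P(w|T_k) of shape (K, V) and P(T_k|M_D) of shape (n_docs, K).
    """
    rng = np.random.default_rng(seed)
    D, V = counts.shape
    p_w_t = rng.random((K, V)); p_w_t /= p_w_t.sum(axis=1, keepdims=True)
    p_t_d = rng.random((D, K)); p_t_d /= p_t_d.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # E-step (Eq. 6): posterior P(T_k | w_i, M_D) for every (doc, word) pair
        post = p_t_d[:, :, None] * p_w_t[None, :, :]          # (D, K, V)
        post /= post.sum(axis=1, keepdims=True) + 1e-12
        # M-step (Eqs. 7-8): reweight by counts c(w, D)
        weighted = counts[:, None, :] * post                  # (D, K, V)
        p_w_t = weighted.sum(axis=0)
        p_w_t /= p_w_t.sum(axis=1, keepdims=True)
        p_t_d = weighted.sum(axis=2)
        p_t_d /= p_t_d.sum(axis=1, keepdims=True)
    return p_w_t, p_t_d
```

The per-document mixture `p_t_d` and topic unigrams `p_w_t` then plug directly into Eq. (4) for ranking.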
Compared to PLSA, LDA overcomes the overfitting problem of PLSA as well as its inability to assign probabilities to new documents.", "cite_spans": [ { "start": 378, "end": 397, "text": "(Blei et al., 2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "Since LDA has a more complex form for model optimization, which is difficult to solve by exact inference, several approximate inference algorithms, such as the variational Bayes approximation (Blei et al., 2003), the expectation propagation method (Ypma et al., 2002), and the Gibbs sampling algorithm (Griffiths, 2004), have been proposed in the literature for estimating the model parameters of LDA. In this paper, we adopt the Gibbs sampling algorithm, where \u03b8 and \u03c6 are marginalized out and only the latent variables T_k are sampled, to infer the model parameters. Then, the probability of a word w_i generated by a document D in the LDA model is expressed by:", "cite_spans": [ { "start": 196, "end": 215, "text": "(Blei et al., 2003)", "ref_id": "BIBREF0" }, { "start": 253, "end": 272, "text": "(Ypma et al., 2002)", "ref_id": "BIBREF21" }, { "start": 308, "end": 325, "text": "(Griffiths, 2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "P_{\\mathrm{LDA}}(w_i|\\mathrm{M}_D, \\hat{\\varphi}, \\hat{\\theta}) = \\sum_{k=1}^{K} P(w_i|T_k, \\hat{\\varphi}) P(T_k|\\mathrm{M}_D, \\hat{\\theta}) (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "where \\hat{\\varphi} and \\hat{\\theta} are the posterior estimates of \u03c6 and \u03b8, respectively. 
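A minimal collapsed Gibbs sampler in the spirit of Griffiths and Steyvers (2004): \u03b8 and \u03c6 are marginalized out, only topic assignments are sampled, and the posterior means of Eq. (10) are recovered from the counts afterward. The function name, hyperparameter values, and iteration count are illustrative assumptions.

```python
import numpy as np

def lda_gibbs(docs, V, K, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA. docs: list of word-id lists."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))   # document-topic counts
    nkw = np.zeros((K, V))           # topic-word counts
    nk = np.zeros(K)                 # topic totals
    z = [rng.integers(K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            k = z[d][n]; ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                k = z[d][n]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # full conditional: (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta)
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][n] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    # posterior mean estimates feeding Eq. (10)
    theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
    phi = (nkw + beta) / (nkw + beta).sum(axis=1, keepdims=True)
    return theta, phi
```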
We refer the readers to Griffiths and Steyvers (2004) for the detailed inference procedure.", "cite_spans": [ { "start": 93, "end": 122, "text": "Griffiths and Steyvers (2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Document Topic Model (DTM)", "sec_num": "2.2" }, { "text": "Rather than treating each document in the collection as a document topic model, we can regard each word w_j of the language as a word topic model (WTM). To this end, all words are assumed to share the same set of latent topic distributions but have different weights over these topics. The WTM model of each word w_j for predicting the occurrence of a particular word w_i can be expressed by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Topic Model (WTM)", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_{\\mathrm{WTM}}(w_i|\\mathrm{M}_{w_j}) = \\sum_{k=1}^{K} P(w_i|T_k) P(T_k|\\mathrm{M}_{w_j})", "eq_num": "(11)" } ], "section": "Word Topic Model (WTM)", "sec_num": "2.3" }, { "text": "where P(w_i|T_k) and P(T_k|M_{w_j}) are the probability of a word w_i occurring in a specific latent topic T_k and the probability of the topic T_k conditioned on M_{w_j}, respectively. 
Then, each document naturally can be viewed as a composite WTM, while the relevance measure between a word w_i and a document D can be expressed by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Topic Model (WTM)", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P_{\\mathrm{WTM}}(w_i|\\mathrm{M}_D) = \\sum_{w_j \\in D} P_{\\mathrm{WTM}}(w_i|\\mathrm{M}_{w_j}) P_{\\mathrm{ULM}}(w_j|\\mathrm{M}_D)", "eq_num": "(12)" } ], "section": "Word Topic Model (WTM)", "sec_num": "2.3" }, { "text": "The resulting composite WTM model for D, in a sense, can be thought of as a kind of language model for translating words in D to w_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Topic Model (WTM)", "sec_num": "2.3" }, { "text": "The model parameters of WTM can be inferred by unsupervised training as well. More precisely, each WTM model M_{w_j} can be trained by concatenating those words occurring in the vicinity of (or a context window of size S around) each occurrence of w_j, which are postulated to be relevant to w_j, to form a relevant observation sequence O_{w_j}, whose words are also assumed to be conditionally independent given M_{w_j}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Topic Model (WTM)", "sec_num": "2.3" }, { "text": "Therefore, the WTM models of the words in the vocabulary set W can be estimated by maximizing the total likelihood of their corresponding relevant observation sequences generated by themselves:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Topic Model (WTM)", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_{\\mathrm{WTM}} = \\prod_{w_j \\in \\mathbf{W}} P_{\\mathrm{WTM}}(O_{w_j}|\\mathrm{M}_{w_j}) = \\prod_{w_j \\in \\mathbf{W}} \\prod_{w_i \\in O_{w_j}} P_{\\mathrm{WTM}}(w_i|\\mathrm{M}_{w_j})^{c(w_i, O_{w_j})}", "eq_num": "(13)" } ], "section": "Word Topic 
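Given trained WTM parameters, the composite document model of Eqs. (11)-(12) can be sketched as below; the array layouts and the function name `wtm_doc_prob` are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def wtm_doc_prob(word_id, doc_word_ids, p_w_t, p_t_w):
    """P_WTM(w_i|M_D) = sum_j P_WTM(w_i|M_{w_j}) * P_ULM(w_j|M_D)  (Eq. 12).

    p_w_t: (K, V) topic unigrams P(w|T_k);
    p_t_w: (V, K) per-word topic weights P(T_k|M_{w_j}).
    """
    counts = Counter(doc_word_ids)
    n = len(doc_word_ids)
    prob = 0.0
    for wj, c in counts.items():
        p_wtm = float(p_t_w[wj] @ p_w_t[:, word_id])  # Eq. (11)
        prob += p_wtm * (c / n)                       # weighted by P_ULM(w_j|M_D)
    return prob
```

Note that, unlike DTM, no online inference is needed for a new document: its model is just this count-weighted mixture of the word models it contains.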
Model (WTM)", "sec_num": "2.3" }, { "text": "Then, the parameters of each WTM model can be estimated using the following EM updating formulae:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Topic Model (WTM)", "sec_num": "2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "-E (Expectation) Step: P(T_k|w_i, \\mathrm{M}_{w_j}) = \\frac{P(w_i|T_k) P(T_k|\\mathrm{M}_{w_j})}{\\sum_{k'} P(w_i|T_{k'}) P(T_{k'}|\\mathrm{M}_{w_j})} (14) -M (Maximization) Step: P(w_i|T_k) = \\frac{\\sum_{w_j \\in \\mathbf{W}} c(w_i, O_{w_j}) P(T_k|w_i, \\mathrm{M}_{w_j})}{\\sum_{w_n} \\sum_{w_j \\in \\mathbf{W}} c(w_n, O_{w_j}) P(T_k|w_n, \\mathrm{M}_{w_j})} (15) P(T_k|\\mathrm{M}_{w_j}) = \\frac{\\sum_{w} c(w, O_{w_j}) P(T_k|w, \\mathrm{M}_{w_j})}{\\sum_{w'} c(w', O_{w_j})}", "eq_num": "(16)" } ], "section": "Word Topic Model (WTM)", "sec_num": "2.3" }, { "text": "Along a similar vein to the LDA model, a word Dirichlet topic model (WDTM) can be derived as well. WDTM essentially has the same ranking formula as WTM, except that it further assumes the model parameters are governed by some Dirichlet distributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Topic Model (WTM)", "sec_num": "2.3" }, { "text": "DTM (PLSA or LDA) and WTM (WTM or WDTM) can be analyzed from several perspectives. First, DTM models the co-occurrence relationship between words and documents, while WTM models the co-occurrence relationship between words in the collection. More explicitly, we may compare DTM and WTM through nonnegative (or probabilistic) matrix factorizations, as depicted in Figure 1 . For DTM models, each column of Matrix A denotes the probability vector of a document in the collection, which offers a probability for every word occurring in the document. 
For WTM models, each column of Matrix B is the probability vector of a word's vicinity, which offers a probability for observing every other word occurring in its vicinity. Both Matrices A and B can be decomposed into two matrices standing for the topic mixture components and the topic mixture weights, respectively. Furthermore, the topic mixture weights of DTM for a new document have to be estimated online using EM or other more sophisticated algorithms, which would be time-consuming; on the contrary, the topic mixture weights of WTM for a new document D can be obtained on the basis of the topic mixture weights of all words involved in the document without using a complex inference procedure.", "cite_spans": [], "ref_spans": [ { "start": 363, "end": 371, "text": "Figure 1", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Analytic Comparisons between DTM and WTM", "sec_num": "2.4" }, { "text": "Finally, if the context window for modeling the vicinity information of WTM is reduced to one word (S = 1), WTM either degenerates to a unigram model when the latent topic number K is set to 1, or can be viewed as analogous to a bigram model (as K = V) or an aggregate", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analytic Comparisons between DTM and WTM", "sec_num": "2.4" }, { "text": "Markov model (as 1 < K < V). 
Thus, with appropriate values of S and K, WTM can serve as a good approximation to the bigram or skip-bigram models for sparse data (Chen, 2009).", "cite_spans": [ { "start": 203, "end": 215, "text": "(Chen, 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Analytic Comparisons between DTM and WTM", "sec_num": "2.4" }, { "text": "As mentioned in the previous section, DTM and WTM differ in their fundamental premises: they determine a hidden topical decomposition of the document collection by exploring the topical information underlying the \"word-document\" or the \"word-word\" co-occurrence relationships, respectively. Thus, we may fuse the results of the two different topical decompositions from DTM and WTM together for better ranking of spoken documents. One possible method is to train each of these two models individually and then linearly combine their respective document-ranking scores in the log-likelihood domain (called \"Individual Topics\" hereafter). Nevertheless, this approach could not arrive at the same set of topic components (i.e., ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid of DTM and WTM", "sec_num": "3.1" }, { "text": "P(w_i | T_k), k = 1, \u2026, K", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid of DTM and WTM", "sec_num": "3.1" }, { "text": ") that are potentially associated with the spoken document collection. Alternatively, we may seek to conduct a single (or unique) topical decomposition of the spoken document collection by simultaneously exploiting these two types of co-occurrence relationships (called \"Shared Topics\" hereafter). 
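The "Individual Topics" fusion just described (two separately trained models whose per-document scores are linearly combined in the log-likelihood domain) can be sketched as follows; the document scores and the fusion weight are hypothetical placeholders, with the weight meant to be tuned on development queries.

```python
def fuse_scores(dtm_scores, wtm_scores, weight=0.7):
    """'Individual Topics' fusion: DTM and WTM are trained separately and
    their per-document log-likelihood scores are combined linearly.
    The fusion weight here is a placeholder, not a value from the paper."""
    return {d: weight * dtm_scores[d] + (1.0 - weight) * wtm_scores[d]
            for d in dtm_scores.keys() & wtm_scores.keys()}

# Toy log-likelihood scores for two documents under each model.
fused = fuse_scores({"d1": -10.0, "d2": -12.0}, {"d1": -11.0, "d2": -9.0})
ranking = sorted(fused, key=fused.get, reverse=True)
```

Because the scores are log-likelihoods, the linear combination corresponds to a weighted geometric mean of the two models' query likelihoods.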
This approach tries to estimate the DTM and WTM model parameters by jointly maximizing the total likelihood of words occurring in the spoken documents and the total likelihood of the words occurring in the vicinities of arbitrary words in the vocabulary. A pictorial representation for the probabilistic matrix decomposition of the spoken document collection with this approach is illustrated in Figure 2 , where each column of the left hand side matrix denotes either the probability vector of a document in the collection, which offers a probability for every word occurring in the document (i.e., DTM), or the probability vector of the vicinity of a word in the vocabulary, which offers a probability for observing every other word occurring in the vicinity (i.e., WTM). Then, this matrix can be decomposed into two matrices standing for the topic mixture components (i.e., F) and the topic mixture weights (i.e., H and Q'), respectively.", "cite_spans": [], "ref_spans": [ { "start": 694, "end": 702, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Hybrid of DTM and WTM", "sec_num": "3.1" }, { "text": "In this paper, we also investigate leveraging subword-level information cues for topic modeling in Chinese SDR. To do this, syllable pairs are taken as basic units for indexing instead of words. In the following paragraphs, we will elucidate the reasons for using syllable-level features for the retrieval purpose before describing how they can be integrated into the DTM and WTM models. Mandarin Chinese is phonologically compact; an inventory of about 400 base syllables provides full phonological coverage of Mandarin audio if the differences in tones are disregarded. On the other hand, an inventory of about 13,000 characters provides full textual coverage of written Chinese. Each word is composed of one or more characters, and each character is pronounced as a monosyllable and is a morpheme with its own meaning. 
As a result, new words are generated easily by combining a few characters. Such new words also include many proper nouns, like personal names, organization names, and domain-specific terms. The construction of words from characters is often quite flexible. One phenomenon is that different words describing the same or similar concepts can be constructed from slightly different characters. Another phenomenon is that a longer word can be arbitrarily abbreviated into a shorter word. Moreover, there is a many-to-many mapping between characters and syllables; a foreign word can be translated into different Chinese words based on its pronunciation, while different translations usually have some syllables in common, or may have exactly the same syllables. Statistical evidence also shows that, in the Chinese language, about 91% of the top 5,000 most frequently used polysyllabic words are bi-syllabic, i.e., they are pronounced as a segment of two syllables. Therefore, such syllable segments (or syllable pairs) carry rich linguistic information and are well suited to serve as index terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Modeling with Subword-level Units", "sec_num": "3.2" }, { "text": "The characteristics of the Chinese language mentioned above lead to some special considerations for SDR. Word-level index features possess more semantic information than syllable-level ones; thus, word-based retrieval enhances the precision. On the other hand, syllable-level index features are more robust against Chinese word tokenization ambiguity, homophone ambiguity, the open-vocabulary problem, and speech recognition errors; therefore, the syllable-level information would enhance the recall. Accordingly, there is good reason to fuse the information obtained from index features of different levels. 
It has been shown that using syllable pairs as the index terms is very effective for Chinese SDR, and the retrieval performance can be further improved by incorporating the information from word-level index features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Topic Modeling with Subword-level Units", "sec_num": "3.2" }, { "text": "In this paper, both the manual transcript and the recognition transcript of each spoken document, in the form of a word stream, were automatically converted into a stream of overlapping syllable pairs. Then, all of the distinct syllable pairs occurring in the spoken document collection were identified to form an indexing vocabulary of syllable pairs. Topic modeling with the syllable-level information can be fulfilled in two ways. One is to simply use syllable pairs, as a replacement for words, to represent the spoken documents and to construct the associated probabilistic latent topic distributions for DTM and WTM accordingly. The other is to jointly utilize both words and syllable pairs, as two types of index terms, to represent the spoken documents, as well as to construct the associated probabilistic latent topic distributions. To this end, each spoken document is represented virtually with a spliced text stream, consisting of both words and syllable pairs. Figure 3 takes DTM as an example to graphically illustrate such an attempt, which is expected to discover correlated topic patterns of the spoken document collection when using both word-and syllable-level index features simultaneously. ", "cite_spans": [], "ref_spans": [ { "start": 975, "end": 983, "text": "Figure 3", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Topic Modeling with Subword-level Units", "sec_num": "3.2" }, { "text": "We used the Topic Detection and Tracking (TDT-2) collection for the SDR task (LDC, 2000) . 
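The conversion of a transcript into overlapping syllable-pair index terms described in Section 3.2 can be sketched in a few lines; the Pinyin syllable stream below is a made-up example, and a real system would obtain the syllables from the recognizer output or a pronunciation lexicon.

```python
def overlapping_syllable_pairs(syllables):
    """Convert a syllable stream into overlapping syllable-pair index terms,
    as used to build the syllable-level indexing vocabulary."""
    return [(syllables[i], syllables[i + 1])
            for i in range(len(syllables) - 1)]

# Hypothetical Pinyin syllable stream for a short Mandarin phrase.
stream = ["yu", "yin", "wen", "jian", "jian", "suo"]
pairs = overlapping_syllable_pairs(stream)
```

A stream of n syllables thus yields n - 1 overlapping pairs, so the pair stream preserves the syllable order while remaining robust to word tokenization errors.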
TDT is a DARPA-sponsored program where participating sites tackle tasks such as identifying the first time a news story is reported on a given topic or grouping news stories with similar topics from audio and textual streams of newswire data. Both the English and Mandarin Chinese corpora have been studied in the recent past. The TDT corpora have also been used for cross-language spoken document retrieval (CLSDR) in the Mandarin English Information (MEI) Project (Meng et al., 2004) . In this paper, we used the Mandarin Chinese collections of the TDT corpora for the retrospective retrieval task, such that the statistics for the entire document collection were obtainable. Chinese text news stories from Xinhua News Agency were compiled to form the test queries (or query exemplars). More specifically, in the following experiments, we will either use a whole text news story as a \"long\" query or merely extract the title field from a text news story to form a relatively \"short\" query.", "cite_spans": [ { "start": 77, "end": 88, "text": "(LDC, 2000)", "ref_id": null }, { "start": 558, "end": 577, "text": "(Meng et al., 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus and Evaluation Metric", "sec_num": "4.1" }, { "text": "The Mandarin news stories (audio) from Voice of America news broadcasts were used as the spoken documents. All news stories were exhaustively tagged with event-based topic labels, which merely serve as the relevance judgments for performance evaluation and will not be utilized in the training of topic models (cf. Section 2). Table 1 shows some basic statistics about the corpus used in this paper. The Dragon large-vocabulary continuous speech recognizer provided Chinese word transcripts for our Mandarin audio collections. 
To assess the performance level of the recognizer, we spot-checked a fraction of the spoken document collection (about 40 hours) and obtained error rates of 35.38% (word), 17.69% (character), and 13.00% (syllable). Since Dragon's lexicon is not available, we augmented the LDC Mandarin Chinese Lexicon with 24,000 words extracted from Dragon's word recognition output, and used the augmented LDC lexicon (about 51,000 words) to tokenize the manual transcripts for computing error rates. We also used this augmented LDC lexicon to tokenize the text queries in the retrieval experiments. The retrieval results are expressed in terms of non-interpolated mean average precision (mAP) following the TREC evaluation (Harman, 1995) , which is computed by the following equation:", "cite_spans": [ { "start": 1277, "end": 1291, "text": "(Harman, 1995)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 327, "end": 334, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Corpus and Evaluation Metric", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathrm{mAP} = \\frac{1}{L} \\sum_{i=1}^{L} \\frac{1}{N_i} \\sum_{j=1}^{N_i} \\frac{j}{r_{i,j}}", "eq_num": "(17)" } ], "section": "Corpus and Evaluation Metric", "sec_num": "4.1" }, { "text": "where L is the number of test queries, N_i is the total number of documents that are relevant to query Q_i, and r_{i,j} is the position (rank) of the j-th document that is relevant to query Q_i, counting down from the top of the ranked list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus and Evaluation Metric", "sec_num": "4.1" }, { "text": "Topic models, such as DTM and WTM, introduce a set of latent topics to cluster concept-related words and match a query with a document at the level of these word clusters. 
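Eq. (17) translates directly into code. Below is a minimal sketch, assuming each query's relevant-document ranks are already known from the relevance judgments.

```python
def mean_average_precision(relevant_ranks):
    """Non-interpolated mAP as in Eq. (17).

    relevant_ranks[i] lists the ranks r_{i,j} (1-based) at which query i's
    relevant documents appear in the system's ranked list.
    """
    ap_sum = 0.0
    for ranks in relevant_ranks:
        n_i = len(ranks)  # N_i: number of documents relevant to query i
        # sum over j of j / r_{i,j}, with ranks sorted so j counts relevant hits
        ap_sum += sum(j / r for j, r in enumerate(sorted(ranks), start=1)) / n_i
    return ap_sum / len(relevant_ranks)

# One query whose three relevant documents sit at ranks 1, 3, and 5:
# AP = (1/1 + 2/3 + 3/5) / 3
score = mean_average_precision([[1, 3, 5]])
```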
Although document ranking based merely on DTM or WTM tends to increase recall, using just one of them is liable to hurt the precision for SDR. Specifically, they offer coarse-grained concept clues about the document collection at the expense of losing discriminative power among concept-related words at a finer granularity. Therefore, in this paper, when either DTM or WTM was employed in evaluating the relevance between a query Q and a document D, we additionally incorporated the unigram probabilities of a query word (or term) occurring in the document", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Implementation", "sec_num": "4.2" }, { "text": "P^{ULM}(w_i | M_D) and a general text corpus P^{ULM}(w_i | M_C) with the topic model P^{Topic}(w_i | M_D)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Implementation", "sec_num": "4.2" }, { "text": "(either DTM or WTM), for probability smoothing and better performance. For example, the probability of a query word generated by one specific topic model of a document (cf. (4), (10), and (12)) was modified as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Implementation", "sec_num": "4.2" }, { "text": "P(w_i | D) = \u03b1 \u22c5 [ \u03b2 \u22c5 P^{Topic}(w_i | M_D) + (1 - \u03b2) \u22c5 P^{ULM}(w_i | M_D) ] + (1 - \u03b1) \u22c5 P^{ULM}(w_i | M_C) (18) where P^{Topic}(w_i | M_D)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Implementation", "sec_num": "4.2" }, { "text": "can be the probability of a word w_i generated by PLSA or LDA (cf.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Implementation", "sec_num": "4.2" }, { "text": "(4) or (10)) or WTM (cf. (12)); the values of the interpolation weights \u03b1 and \u03b2 can be empirically set or further optimized by other optimization techniques (Zhai, 2008) . 
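The two-stage interpolation of Eq. (18) can be sketched as follows; the model probabilities and the values of alpha and beta are toy placeholders, and a real system would tune the weights by grid search as described in Section 5.2.

```python
import math

def smoothed_word_prob(w, p_topic_d, p_ulm_d, p_ulm_c, alpha=0.7, beta=0.5):
    """Eq. (18): interpolate the topic model with the document and background
    unigram models.  alpha/beta here are placeholder values only."""
    inner = beta * p_topic_d.get(w, 0.0) + (1.0 - beta) * p_ulm_d.get(w, 0.0)
    return alpha * inner + (1.0 - alpha) * p_ulm_c.get(w, 0.0)

def query_log_likelihood(query, p_topic_d, p_ulm_d, p_ulm_c,
                         alpha=0.7, beta=0.5):
    """Document-ranking score: log-likelihood of the query terms under the
    smoothed document model (assumes every term has nonzero probability)."""
    return sum(math.log(smoothed_word_prob(w, p_topic_d, p_ulm_d, p_ulm_c,
                                           alpha, beta)) for w in query)
```

Note how the background model term keeps the probability nonzero for query words absent from the document, which is what makes the log-likelihood score well defined.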
A detailed account of this issue will be given in Section 5.2. On the other hand, the Gibbs sampling algorithm (Griffiths, 2004) is used to infer the parameters of LDA and WDTM.", "cite_spans": [ { "start": 155, "end": 167, "text": "(Zhai, 2008)", "ref_id": "BIBREF22" }, { "start": 281, "end": 298, "text": "(Griffiths, 2004)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Model Implementation", "sec_num": "4.2" }, { "text": "The baseline retrieval results obtained by the ULM model are shown in Table 2 . The retrieval results, assuming manual transcripts for the spoken documents to be retrieved (denoted TD, text documents) are known, are listed for reference and are compared to the results when only erroneous recognition transcripts generated by speech recognition are available (denoted SD, spoken documents). As can be seen, the performance gap between the TD and SD cases was about 7% absolute in terms of mAP when using either long or short queries, although the word error rate (WER) for the spoken document collection was higher than 35%. On the other hand, retrieval using short queries degraded the performance approximately 45% relative to retrieval using long queries. This is due to the fact that a long query usually contains a variety of words describing similar concepts. Even though some of these words might not be correctly transcribed in the relevant spoken documents, they, in the ensemble, still provide plenty of clues for literal term matching. From now on, unless otherwise stated, we will only report the retrieval results for the SD case. ", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Baseline Experiments", "sec_num": "5.1" }, { "text": "In the next set of experiments, we assessed the utility of various topic models for SDR, including PLSA, LDA, and WTM, as well as WDTM. The corresponding retrieval results are shown in Table 3 . 
It is worth mentioning that all of these topic models were trained without supervision and had the same number of latent topics, which was set to 32 in this study. A detailed analysis for the impact of the model complexity of PLSA and WTM on SDR performance can be found in Chen (2009) . On the other hand, both WTM and WDTM had the same context window size S set to 21. Since this project set out to investigate the effectiveness of various topic models for SDR, the interpolation weights \u03b1 and \u03b2 defined in (18) were optimized for each respective topic model with a two-dimensional grid search over the range from 0 to 1 and in increments of 0.1. Consulting Table 3 , we find that all of these topic models give moderate but consistent improvement over the baseline ULM model when long queries are evaluated. One possible explanation is that the information need already might have been stated fully in a long query, whereas additional incorporation of the topical information into the document language model does not seem to offer many extra clues for document ranking. On the contrary, the retrieval performance receives great boosts from the additional use of the topical information when the queries are short. This implies that incorporating the topical information with the literal term information for document modeling is especially useful when the query is inadequate to address the information need. We then turned our attention to compare the following topic models. 1) LDA outperforms PLSA, and WDTM outperforms WTM. This finding supports the argument that constraining the latent topic distributions with Dirichlet priors will lead to better model estimation. 2) LDA is the best among these topic models. As compared to the baseline ULM model, it yielded about 5% and 39% relative improvements for long and short queries, respectively. 
Moreover, we investigated the contribution of the fusion of DTM and WTM to the retrieval performance (cf. the last two rows of Table 3 ). Here, we took LDA and WDTM as the example models since they achieved better retrieval performance in the previous experiment. It is also worth mentioning that the row \"LDA+WDTM (Individual Topics)\" shown in Table 3 indicates that each topic model was trained individually and their respective document-ranking scores were combined in the log-likelihood domain. In contrast, the row \"LDA+WDTM (Shared Topics)\" in Table 3 denotes the hybrid of DTM and WTM in both model training and testing (cf. Section 3.1). As is evident, the fusion of LDA and WDTM (i.e., with either individual sets of topics or a shared set of topics) is beneficial to the retrieval performance. This provides an additional 1% absolute improvement for the case of using short queries, as compared to that using LDA alone. Nevertheless, the joint exploration of \"word-document\" and \"word-word\" latent topic information (i.e., with a shared set of topics) in the training phase does not provide any added benefit compared to the results obtained by training LDA and WDTM individually (i.e., with individual sets of topics). This is an interesting phenomenon and awaits further exploration. Readers may refer to Chen, et al. (2010) for an attempt that applies a similar idea to the speech recognition task.", "cite_spans": [ { "start": 469, "end": 480, "text": "Chen (2009)", "ref_id": "BIBREF4" }, { "start": 3373, "end": 3392, "text": "Chen, et al. 
(2010)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 185, "end": 192, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 855, "end": 862, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 2176, "end": 2183, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 2396, "end": 2403, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 2605, "end": 2612, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experiments on DTM and WTM", "sec_num": "5.2" }, { "text": "To go a step further, we attempted to investigate the more subtle interaction effects among the topic model Figure 4 , the retrieval model is based merely on the topical information, which has poor retrieval performance, especially for the case using long queries. One possible reason is that a long query may contain several common non-informative words and using the topical information alone will let the query become biased away from representing the true theme of the information need, probably due to these non-informative words. 
This argument again can be verified by examining the rightmost columns of Figure 4 , where using the background model", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 116, "text": "Figure 4", "ref_id": "FIGREF9" }, { "start": 610, "end": 618, "text": "Figure 4", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "Experiments on DTM and WTM", "sec_num": "5.2" }, { "text": "P^{ULM}(w_i | M_C)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on DTM and WTM", "sec_num": "5.2" }, { "text": "can absorb the contributions of the common (or non-informative) words made to document ranking, thus giving better retrieval performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on DTM and WTM", "sec_num": "5.2" }, { "text": "Looking at each row of Figure 4 , we see that smoothing LDA with the document model", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 31, "text": "Figure 4", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "Experiments on DTM and WTM", "sec_num": "5.2" }, { "text": "P^{ULM}(w_i | M_D)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on DTM and WTM", "sec_num": "5.2" }, { "text": "is also useful. This is attributed to the fact that discriminative (or informative) words will occur repeatedly in a specific document;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on DTM and WTM", "sec_num": "5.2" }, { "text": "P^{ULM}(w_i | M_D)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on DTM and WTM", "sec_num": "5.2" }, { "text": "gives more emphasis to these words. 
On the other hand, Figure 4 also reflects that smoothing LDA with the background model", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 63, "text": "Figure 4", "ref_id": "FIGREF9" } ], "eq_spans": [], "section": "Experiments on DTM and WTM", "sec_num": "5.2" }, { "text": "P^{ULM}(w_i | M_C)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on DTM and WTM", "sec_num": "5.2" }, { "text": "is necessary when the query is long, but it does not seem to be helpful for the case of using a relatively short query. This is mainly because the ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on DTM and WTM", "sec_num": "5.2" }, { "text": "In the fourth set of experiments, we evaluated the performance of the topic models when syllable pairs were utilized instead as the index terms. Here, we took LDA and WDTM as the example topic models, and the corresponding models are denoted by Syl_LDA and Syl_WDTM, respectively. The fusion of words and syllable pairs for topic modeling was investigated as well. Notice that Word_LDA denotes LDA using words as the index terms, which was termed LDA in the previous sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on using Subword-level Index features", "sec_num": "5.3" }, { "text": "The retrieval results of Syl_LDA and Syl_WDTM are shown in Table 4 , where the results achieved by ULM using syllable pairs as the index terms (denoted by Syl_ULM) are also depicted for comparison. Several observations can be made from Table 4 . First, the topic models (Syl_LDA and Syl_WDTM) again are superior to the unigram language model when the syllable-level information is used in place of the word-level information (denoted by Syl_ULM). Syl_LDA results in absolute improvements of about 8% and 3% over Syl_ULM when evaluated using the long and short queries, respectively. 
Second, the topic models with the syllable-level information perform worse than those with the word-level information. This may be due simply to the fact that syllable pairs are not as good as words in representing the semantic content of the queries and the documents. Third, the fusion of the word- and syllable-level information for topic modeling (each topic model was trained individually beforehand) demonstrates much better retrieval results (cf. the last two rows of Table 4 ) as compared to that of the topic models with merely the word-level information (cf. Table 3) . Finally, we examined the contributions made by modeling the correlated topic patterns of the spoken document collection when jointly using words and syllable pairs in the construction of the latent topic distributions. We took the LDA model as an example to study the effectiveness of such an attempt, and the associated results are shown in Table 5 . The results reveal that, when only syllable pairs are used as the index terms for the final document ranking, modeling the correlated topic patterns, namely, jointly using words and syllable pairs in the construction of the latent topic distributions for LDA (denoted by Syl_LDA (Corr.)) is better than that only using syllable pairs to construct the latent topic distributions (denoted by Syl_LDA). On the other hand, such an attempt slightly hurts the performance of LDA using words for the final document ranking (denoted by Word_LDA (Corr.)). This phenomenon seems to be reasonable because the semantic meanings carried by words would probably see interference from syllable pairs when we attempt to splice these two distinct index term streams together for constructing the latent topic distributions of LDA. It can be observed that Syl_LDA (Corr.) significantly outperforms all other topic models in the case of using long queries (cf. Tables 3, 4, and 5). 
This demonstrates the potential benefit of using the syllable-level information in topic modeling for SDR if we can carefully delineate the syllable-level information. Nevertheless, in the case of using short queries, Syl_LDA (Corr.) does not perform as well as LDA using words as the index terms to construct the latent topic distributions (denoted by Word_LDA). We conjecture that one possible reason is that the topical information inherent in a short query cannot be unambiguously depicted with limited syllable pairs. In order to mitigate this deficiency, we combined Word_LDA with Syl_LDA (Corr.) to form a new retrieval model (denoted by Word_LDA + Syl_LDA (Corr.)), which yields the best results of 0.636 and 0.431 for long and short queries, respectively. One should keep in mind that these results were obtained using the erroneous speech transcripts of the spoken documents (i.e., the SD case). This also reveals that Word_LDA + Syl_LDA (Corr.) can make retrieval using the speech transcripts achieve almost the same performance as ULM using the manual transcripts (i.e., the TD case) when the queries are long, and can perform even better than the latter for short queries. ", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 66, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 240, "end": 247, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 1055, "end": 1062, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 1149, "end": 1157, "text": "Table 3)", "ref_id": "TABREF4" }, { "start": 1502, "end": 1509, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Experiments on using Subword-level Index features", "sec_num": "5.3" }, { "text": "In this paper, we have investigated the utility of two categories of topic models, namely, the document topic models (DTM) and the word topic models (WTM), for SDR. 
Moreover, we have leveraged different levels of index features for topic modeling, including words, syllable pairs, and their combinations, so as to prevent the performance degradation facing most SDR tasks. The proposed models indeed demonstrated significant performance improvements over the baseline model on the Mandarin SDR task. Our future research directions include: 1) training the topic models in a lightly supervised manner through the exploration of users' click-through data, 2) investigating discriminative training of topic models, 3) integrating the topic models with the other more elaborate representations of the speech recognition output (Yi and Allan, 2009; Chelba et al., 2008) for larger-scale SDR tasks, and 4) utilizing speech summarization techniques to help estimate better document models and topic models.", "cite_spans": [ { "start": 823, "end": 843, "text": "(Yi and Allan, 2009;", "ref_id": null }, { "start": 844, "end": 864, "text": "Chelba et al., 2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6." 
} ], "back_matter": [ { "text": "This work was sponsored in part by \"Aim for the Top University Plan\" of National Taiwan Normal University and Ministry of Education, Taiwan, and the National Science Council, Taiwan, under Grants NSC 101-2221-E-003 -024 -MY3, NSC 99-2221-E-003-017-MY3, NSC 98-2221-E-003-011-MY3, NSC 100-2515-S-003-003, and NSC 99-2631-S-003-002.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Latent Dirichlet allocation", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blei, D.M., Ng, A.Y., & Jordan, M. I., (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993-1022.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Topic models", "authors": [ { "first": "D", "middle": [], "last": "Blei", "suffix": "" }, { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 2009, "venue": "Text Mining: Theory and Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blei, D. & Lafferty, J., (2009). Topic models. In A. Srivastava and M. Sahami, (eds.), Text Mining: Theory and Applications. 
Taylor and Francis, 2009.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Probabilistic topic models", "authors": [ { "first": "D", "middle": [], "last": "Blei", "suffix": "" }, { "first": "L", "middle": [], "last": "Carin", "suffix": "" }, { "first": "D", "middle": [], "last": "Dunson", "suffix": "" } ], "year": 2010, "venue": "IEEE Signal Processing Magazine", "volume": "27", "issue": "6", "pages": "55--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blei, D., Carin, L., & Dunson, D., (2010). Probabilistic topic models. IEEE Signal Processing Magazine, 27(6), 55-65.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Retrieval and browsing of spoken content", "authors": [ { "first": "C", "middle": [], "last": "Chelba", "suffix": "" }, { "first": "T", "middle": [ "J" ], "last": "Hazen", "suffix": "" }, { "first": "M", "middle": [], "last": "Sarclar", "suffix": "" } ], "year": 2008, "venue": "IEEE Signal Processing Magazine", "volume": "25", "issue": "3", "pages": "39--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chelba, C., Hazen, T. J., & Sarclar, M., (2008). Retrieval and browsing of spoken content. IEEE Signal Processing Magazine, 25(3), 39-49.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Word topic models for spoken document retrieval and transcription", "authors": [ { "first": "B", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2009, "venue": "ACM Transactions on Asian Language Information Processing", "volume": "8", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, B., (2009). Word topic models for spoken document retrieval and transcription. 
ACM Transactions on Asian Language Information Processing, 8(1), Article 2.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A lattice-based approach to query-by-example spoken document retrieval", "authors": [ { "first": "T", "middle": [ "K" ], "last": "Chia", "suffix": "" }, { "first": "K", "middle": [ "C" ], "last": "Sim", "suffix": "" }, { "first": "H", "middle": [ "Z" ], "last": "Li", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Ng", "suffix": "" } ], "year": 2008, "venue": "Proceeding the ACM SIGIR Conference on R&D in Information Retrieval", "volume": "", "issue": "", "pages": "363--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chia, T. K., Sim, K. C, Li, H. Z. & Ng, H. T., (2008). A lattice-based approach to query-by-example spoken document retrieval. In Proceeding the ACM SIGIR Conference on R&D in Information Retrieval, 363-370.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society, Series B", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dempster, A. P., Laird, N. M., & Rubin, D. B., (1977). Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39 (1): 1-38.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The TREC spoken document retrieval track: A success story", "authors": [ { "first": "J", "middle": [], "last": "Garofolo", "suffix": "" }, { "first": "G", "middle": [], "last": "Auzanne", "suffix": "" }, { "first": "E", "middle": [], "last": "Voorhees", "suffix": "" } ], "year": 2000, "venue": "Proceeding the 8th Text REtrieval Conference. 
NIST", "volume": "", "issue": "", "pages": "107--129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Garofolo, J., Auzanne, G., & Voorhees, E., (2000). The TREC spoken document retrieval track: A success story. In Proceedings of the 8th Text REtrieval Conference. NIST, 107-129.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Finding scientific topics", "authors": [ { "first": "T", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Steyvers", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the National Academy of Sciences", "volume": "", "issue": "", "pages": "5228--5235", "other_ids": {}, "num": null, "urls": [], "raw_text": "Griffiths, T. L. & Steyvers, M., (2004). Finding scientific topics. Proceedings of the National Academy of Sciences, 5228-5235.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Topics in semantic representation", "authors": [ { "first": "T", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "M", "middle": [], "last": "Steyvers", "suffix": "" }, { "first": "J", "middle": [ "B" ], "last": "Tenenbaum", "suffix": "" } ], "year": 2007, "venue": "Psychological Review", "volume": "114", "issue": "", "pages": "211--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Griffiths, T. L., Steyvers, M. & Tenenbaum, J. B., (2007). Topics in semantic representation.
Psychological Review, 114, 211-244.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Spoken Document Retrieval", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Spoken Document Retrieval", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Overview of the Fourth Text Retrieval Conference (TREC-4)", "authors": [ { "first": "D", "middle": [], "last": "Harman", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the Fourth Text Retrieval Conference", "volume": "", "issue": "", "pages": "1--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harman, D., (1995). Overview of the Fourth Text Retrieval Conference (TREC-4). In Proceedings of the Fourth Text Retrieval Conference, 1-23.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Unsupervised learning by probabilistic latent semantic analysis", "authors": [ { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2001, "venue": "Machine Learning", "volume": "42", "issue": "", "pages": "177--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hofmann, T., (2001). Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42, 177-196.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Project topic detection and tracking", "authors": [], "year": 2000, "venue": "Linguistic Data Consortium", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "LDC, (2000). Project topic detection and tracking. Linguistic Data Consortium.
http://www.ldc.upenn.edu/Projects/TDT/.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Spoken document understanding and organization", "authors": [ { "first": "L", "middle": [ "S" ], "last": "Lee", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2005, "venue": "IEEE Signal Processing Magazine", "volume": "22", "issue": "5", "pages": "42--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, L. S. & Chen B., (2005). Spoken document understanding and organization. IEEE Signal Processing Magazine, 22(5), 42-60.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Mandarin-English information (MEI): investigating translingual speech retrieval", "authors": [ { "first": "H", "middle": [], "last": "Meng", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" }, { "first": "S", "middle": [], "last": "Khudanpur", "suffix": "" }, { "first": "G", "middle": [ "A" ], "last": "Levow", "suffix": "" }, { "first": "W", "middle": [ "K" ], "last": "Lo", "suffix": "" }, { "first": "D", "middle": [], "last": "Oard", "suffix": "" }, { "first": "P", "middle": [], "last": "Schone", "suffix": "" }, { "first": "K", "middle": [], "last": "Tang", "suffix": "" }, { "first": "H", "middle": [ "M" ], "last": "Wang", "suffix": "" }, { "first": "J", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2004, "venue": "Computer Speech and Language", "volume": "18", "issue": "2", "pages": "163--179", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meng, H., Chen, B., Khudanpur, S., Levow, G. A., Lo, W. K., Oard, D., Schone, P., Tang, K., Wang, H. M., & Wang, J., (2004). Mandarin-English information (MEI): investigating translingual speech retrieval. 
Computer Speech and Language, 18(2), 163-179.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A hidden Markov model information retrieval system", "authors": [ { "first": "D", "middle": [ "R H" ], "last": "Miller", "suffix": "" }, { "first": "T", "middle": [], "last": "Leek", "suffix": "" }, { "first": "R", "middle": [], "last": "Schwartz", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the ACM SIGIR Conference on R&D in Information Retrieval", "volume": "", "issue": "", "pages": "214--221", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, D. R. H., Leek, T., & Schwartz, R., (1999). A hidden Markov model information retrieval system. In Proceedings of the ACM SIGIR Conference on R&D in Information Retrieval, 214-221.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A language modeling approach to information retrieval", "authors": [ { "first": "J", "middle": [ "M" ], "last": "Ponte", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the ACM SIGIR Conference on R&D in Information Retrieval", "volume": "", "issue": "", "pages": "275--281", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ponte, J. M. & Croft, W. B., (1998). A language modeling approach to information retrieval. In Proceedings of the ACM SIGIR Conference on R&D in Information Retrieval, 275-281.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "LDA-based document models for ad-hoc retrieval", "authors": [ { "first": "X", "middle": [], "last": "Wei", "suffix": "" }, { "first": "W", "middle": [ "B" ], "last": "Croft", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the ACM SIGIR Conference on R&D in Information Retrieval", "volume": "", "issue": "", "pages": "178--185", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei, X., & Croft, W. B., (2006). LDA-based document models for ad-hoc retrieval.
In Proceedings of the ACM SIGIR Conference on R&D in Information Retrieval, 178-185.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Topic modeling for spoken document retrieval using word- and syllable-level information", "authors": [ { "first": "S", "middle": [ "H" ], "last": "Lin", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the third workshop on Searching spontaneous conversational speech", "volume": "", "issue": "", "pages": "3--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, S. H. & Chen, B., (2009). Topic modeling for spoken document retrieval using word- and syllable-level information. In Proceedings of the third workshop on Searching spontaneous conversational speech, 3-10.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Latent topic modeling of word vicinity information for speech recognition", "authors": [ { "first": "K", "middle": [ "Y" ], "last": "Chen", "suffix": "" }, { "first": "H", "middle": [ "S" ], "last": "Chiu", "suffix": "" }, { "first": "B", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 35th IEEE International Conference on Acoustics, Speech, and Signal Processing", "volume": "", "issue": "", "pages": "5394--5397", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, K. Y., Chiu, H. S. & Chen, B., (2010). Latent topic modeling of word vicinity information for speech recognition.
In Proceedings of the 35th IEEE International Conference on Acoustics, Speech, and Signal Processing, 5394-5397.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Expectation-propagation for the generative aspect model", "authors": [ { "first": "T", "middle": [], "last": "Minka", "suffix": "" }, { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Conference on Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "352--359", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minka, T. & Lafferty, J., (2002). Expectation-propagation for the generative aspect model. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 352-359.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Statistical language models for information retrieval", "authors": [ { "first": "C", "middle": [ "X" ], "last": "Zhai", "suffix": "" } ], "year": 2008, "venue": "Synthesis Lectures Series on Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhai, C. X., (2008). Statistical language models for information retrieval (Synthesis Lectures Series on Human Language Technologies). Morgan & Claypool Publishers.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "suggests possible avenues for future work." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "is the number of times that word w_i occurs in the document D and D" }, "FIGREF3": { "num": null, "type_str": "figure", "uris": null, "text": "A schematic illustration for the matrix factorizations of DTM and WTM."
}, "FIGREF5": { "num": null, "type_str": "figure", "uris": null, "text": "A schematic illustration for the matrix factorization of DTM, jointly using words and syllable pairs as the index terms." }, "FIGREF7": { "num": null, "type_str": "figure", "uris": null, "text": "by varying the values of the interpolation weights \u03b1 and \u03b2. Here, LDA was taken as an example topic model since it exhibits the best performance among the topic models compared in this paper. The retrieval results are graphically illustrated in Figure 4, where the horizontal and vertical axes denote the values of \u03b1 and \u03b2, respectively. As seen in the results revealed in Figure 4 beneficial for retrieval. In an extreme case, when both the values of \u03b1 and \u03b2 are set to one, as shown in the top right corner of" }, "FIGREF8": { "num": null, "type_str": "figure", "uris": null, "text": "information need stated by the short query is already concise, and the importance of the role that out or deemphasizing common (or non-informative) words is less pronounced." }, "FIGREF9": { "num": null, "type_str": "figure", "uris": null, "text": "Detailed spoken document retrieval results achieved by LDA with respect to different types of queries." }, "TABREF2": { "num": null, "type_str": "table", "html": null, "content": "
# Spoken documents | 2,265 stories, 46.03 hours of audio
# Distinct test queries | 16 Xinhua text stories (Topics 20001\u223c20096)
 | Min. | Max. | Med. | Mean
Document length (in characters) | 23 | 4841 | 153 | 287
Length of long query (in characters) | 183 | 2623 | 329 | 533
Length of short query (in characters) | 8 | 27 | 13 | 14
# Relevant documents per test query | 2 | 95 | 13 | 29
", "text": "" }, "TABREF3": { "num": null, "type_str": "table", "html": null, "content": "
Query Type | TD | SD
Long | 0.639 | 0.562
Short | 0.370 | 0.293
", "text": "" }, "TABREF4": { "num": null, "type_str": "table", "html": null, "content": "
Method | Long Query | Short Query
ULM | 0.562 | 0.293
PLSA | 0.569 | 0.374
LDA | 0.590 | 0.407
WTM | 0.573 | 0.351
WDTM | 0.574 | 0.377
LDA+WDTM (Individual Topics) | 0.592 | 0.418
LDA+WDTM (Shared Topics) | 0.595 | 0.415
", "text": "" }, "TABREF5": { "num": null, "type_str": "table", "html": null, "content": "
Method | Long Query | Short Query
Syl_ULM | 0.492 | 0.274
Syl_LDA | 0.571 | 0.302
Syl_WDTM | 0.536 | 0.299
Word_LDA+Syl_LDA | 0.613 | 0.412
Word_WDTM+Syl_WDTM | 0.575 | 0.383
", "text": "" }, "TABREF6": { "num": null, "type_str": "table", "html": null, "content": "
Method | Long Query | Short Query
Word_LDA (Corr.) | 0.577 | 0.349
Syl_LDA (Corr.) | 0.618 | 0.356
Word_LDA+Syl_LDA (Corr.) | 0.636 | 0.431
", "text": "" } } } }