{
"paper_id": "O04-1007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:00:53.953646Z"
},
"title": "Finding Relevant Concepts for Unknown Terms Using a Web-based Approach",
"authors": [
{
"first": "Chen-Ming",
"middle": [],
"last": "Hung",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lee-Feng",
"middle": [],
"last": "Chien",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University Taipei",
"location": {
"country": "Taiwan"
}
},
"email": "lfchien@iis.sinica.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Previous research on automatic thesaurus construction most focused on extracting relevant terms for each term of concern from a small-scale and domain-specific corpus. This study emphasizes on utilizing the Web as the rich and dynamic corpus source for term association estimation. In addition to extracting relevant terms, we are interested in finding concept-level information for each term of concern. For a single term, our idea is that to send it into Web search engines to retrieve its relevant documents and we propose a Greedy-EMbased document clustering algorithm to cluster them and determine an appropriate number of relevant concepts for the term. Then the keywords with the highest weighted log likelihood ratio in each cluster are treated as the label(s) of the associated concept cluster for the term of concern. With some initial experiments, the proposed approach has been shown its potential in finding relevant concepts for unknown terms.",
"pdf_parse": {
"paper_id": "O04-1007",
"_pdf_hash": "",
"abstract": [
{
"text": "Previous research on automatic thesaurus construction most focused on extracting relevant terms for each term of concern from a small-scale and domain-specific corpus. This study emphasizes on utilizing the Web as the rich and dynamic corpus source for term association estimation. In addition to extracting relevant terms, we are interested in finding concept-level information for each term of concern. For a single term, our idea is that to send it into Web search engines to retrieve its relevant documents and we propose a Greedy-EMbased document clustering algorithm to cluster them and determine an appropriate number of relevant concepts for the term. Then the keywords with the highest weighted log likelihood ratio in each cluster are treated as the label(s) of the associated concept cluster for the term of concern. With some initial experiments, the proposed approach has been shown its potential in finding relevant concepts for unknown terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "It has been well recognized that a thesaurus is crucial for representing vocabulary knowledge and helping users to reformulate queries in information retrieval systems. One of the important functions of a thesaurus is to provide the information of term associations for information retrieval systems. Previous research on automatic thesaurus construction most focused on extracting relevant terms for each term of concern from a small-scale and domain-specific corpus. In this study, there are several differences extended from the previous research. First, this study emphasizes on utilizing the Web as the rich and dynamic corpus source for term association estimation. Second, the thesaurus to be constructed has no domain limitation and is pursued to be able to benefit Web information retrieval, e.g. to help users disambiguate their search interests, when users gave poor or short queries. Third, in addition to extracting relevant terms, in this study we are interested in finding concept-level information for each term of concern. For example, for a term \"National Taiwan University\" given by a user, it might contain some different but relevant concepts from users' point of view, such as \"main page of National Taiwan University\", \"entrance examination of NTU\", \"the Hospital of NTU\", etc. The purpose of this paper is, therefore, to develop an efficient approach to deal with the above problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "In information retrieval researching area, extracting concepts contained in one text always plays a key role. However, in traditional way, if the text is too short, it is almost impossible to get enough information to extract the contained concepts. In this paper, utilizing the abundant corpora on the World Wide Web, we attempt to find the concepts contained in arbitrary length of topic-specific texts, even only a single term. For a single term, our idea is that to send it into Web search engines to retrieve its relevant documents, and a Greedy-EMbased document clustering algorithm is developed to cluster these documents into an appropriate number of concept clusters, with the similarity of the documents. Then the terms with the highest weighted log likelihood ratio in each clustered document group are treated as the label(s) of the associated concept cluster for the term of concern. To cluster the extracted documents into an unknown number of concept mixtures is important, because it is hard to know an exact number of concepts should be contained in a single term.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "Compared with general text documents, a single term is much shorter and typically do not contain enough information to extract adequate and reliable features. To assist the relevance judgment between short terms, additional knowledge sources would be exploited. Our basic idea is to exploit the Web. Adequate contexts of a single term, e.g., the neighboring sentences of the term, can be extracted from large amounts of Web pages. We found that it is convenient to implement our idea using the existent search engines. A single term could be treated as a query with a certain search request. And its contexts are then obtained directly from the highly ranked search-result snippets, e.g., the titles and descriptions of search-result entries, and the texts surrounding matched terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "The proposed approach relies on an efficient document clustering technique. Usually, in document clustering techniques [1, 3, 4, 8] , each text in a training set is transformed to a certain vector space, then begin agglomerated with another one text step by step depending on their cosine similarity [9] . Thus, a proper transformation from text to vector space, like TFIDF [9] , takes the heavy duty of classification accuracy or concept extraction result. However, a good transformation, i.e. feature extraction, needs a well-labeled training data to support; this is not such an easy task in real world. The idea of this paper is to modify the vector space transformation as probabilistic framework.",
"cite_spans": [
{
"start": 119,
"end": 122,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 123,
"end": 125,
"text": "3,",
"ref_id": "BIBREF2"
},
{
"start": 126,
"end": 128,
"text": "4,",
"ref_id": "BIBREF3"
},
{
"start": 129,
"end": 131,
"text": "8]",
"ref_id": "BIBREF7"
},
{
"start": 300,
"end": 303,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 374,
"end": 377,
"text": "[9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "With the extracted training data from the web, the Greedy EM algorithm [5, 7] is applied in this paper to automatically determine an appropriate number of concepts contained in the given single term through clustering the training documents. This is important while doing relevant concept extraction; otherwise, the number of concepts has to be assumed previously, it is difficult and impractical in real world. After clustering the training documents extracted from the Web into a certain number of mixtures, for each mixture, the representation of this mixture is straightforwardly defined as the term with the highest weighted log likelihood ratio in this mixture. With some initial experiments, the proposed approach has been shown its potential in finding relevant concepts for terms of concern.",
"cite_spans": [
{
"start": 71,
"end": 74,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 75,
"end": 77,
"text": "7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "The remainder of the paper is organized as follows. Section 2 briefly describes the background assumption, i.e. Na\u00efve Bayes, and the modeling based on Naive Bayes. Section 3 describes the overall proposed approach in this paper, including the main idea of the greedy EM algorithm and its application to decide the number of concept domains contained in the training data from the web; in addition, generates keywords via comparing the weighted log likelihood ratio. Section 4 shows the experiments and their result. The summary and our future work are described in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "Before introducing our proposed approach, here introduce a well known way of text representation, i.e., Naive Bayes assumption. Naive Bayes assumption is a particular probabilistic generative model for text. First, introduce some notation about text representation. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NA\u00cfVE BAYES ASSUMPTION AND DOCUMENT CLASSIFICATION",
"sec_num": "2."
},
{
"text": "A document, i d , is considered to be an ordered list of words, ,1 ,2 ,| | { , , , } i i i d i d d d w w w ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NA\u00cfVE BAYES ASSUMPTION AND DOCUMENT CLASSIFICATION",
"sec_num": "2."
},
{
"text": "Furthermore, for each topic class k C of concern, we can express the probability of a document as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NA\u00cfVE BAYES ASSUMPTION AND DOCUMENT CLASSIFICATION",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p d C p w w w C p w C w z j \u03b8 \u03b8 \u03b8 = = < > = < \u220f (2) , , , ( | , , , ) ( | , ) k k i j i z i j d d d p w C w z j p w C \u03b8 \u03b8 < =",
"eq_num": "(3)"
}
],
"section": "NA\u00cfVE BAYES ASSUMPTION AND DOCUMENT CLASSIFICATION",
"sec_num": "2."
},
{
"text": "Based on standard Naive Bayes assumption, the words of a document are generated independently of context, that is, independently of the other words in the same document given the class model. We further assume that the probability of a word is independent of its position within the document. Combine (1) and (2), ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NA\u00cfVE BAYES ASSUMPTION AND DOCUMENT CLASSIFICATION",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ", | | 1 ( | , ) ( | , ) i i j d i k d k j p d C p w C \u03b8 \u03b8 = = \u220f",
"eq_num": "("
}
],
"section": "NA\u00cfVE BAYES ASSUMPTION AND DOCUMENT CLASSIFICATION",
"sec_num": "2."
},
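{
"text": "The following is a minimal Python sketch, not part of the original paper, of how the document likelihood in equation (4) can be evaluated in log space; the function name, the dictionary-based word_probs structure and the fallback probability for unseen words are illustrative assumptions.\n\nimport math\n\ndef log_likelihood(doc_words, word_probs, vocab_size):\n    # log p(d_i | C_k; theta) under the Naive Bayes assumption of equation (4).\n    # doc_words: list of word tokens in d_i; word_probs: dict of p(w_t | C_k; theta);\n    # vocab_size: |V|, used only as an assumed fallback probability for unseen words.\n    unseen = 1.0 / vocab_size\n    return sum(math.log(word_probs.get(w, unseen)) for w in doc_words)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NA\u00cfVE BAYES ASSUMPTION AND DOCUMENT CLASSIFICATION",
"sec_num": "2."
},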
{
"text": "In this section, we describe the overall framework of the proposed approach. Suppose given a single term,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RELEVANT CONCEPTS EXTRACTION",
"sec_num": "3."
},
{
"text": "T , and its relevant concepts are our interest. The first step of the approach is to send T into search engines to retrieve the relevant documents as the corpus. Note that the retrieved documents are the so-called snippets defined in [2] . The detailed process of the approach is described below.",
"cite_spans": [
{
"start": 234,
"end": 237,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RELEVANT CONCEPTS EXTRACTION",
"sec_num": "3."
},
{
"text": "Suppose given a single term, T; then the process of relevant-concept extractions is designed as: Step 1. Send T into search engines to retrieve N snippets as the Web-based corpus, DT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed Approach",
"sec_num": "3.1"
},
{
"text": "Step 2. Apply the Greedy EM algorithm to cluster DT into K mixtures (clusters),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed Approach",
"sec_num": "3.1"
},
{
"text": "1 { } K k k C = , where K is dynamically determined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed Approach",
"sec_num": "3.1"
},
{
"text": "Step 3. For each Ck, k=1 to K, choose the term (s) with the highest weighted log likelihood ratio as the label (s) of Ck.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed Approach",
"sec_num": "3.1"
},
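{
"text": "A minimal sketch of the three-step process above, assuming hypothetical helpers fetch_snippets, greedy_em_cluster and top_wlr_term that stand in for Steps 1-3; it is illustrative only and not the authors' implementation.\n\ndef find_relevant_concepts(term, n_snippets=600):\n    # Step 1: retrieve N snippets for the term as the Web-based corpus D_T\n    # (fetch_snippets is a hypothetical wrapper around a search engine).\n    corpus = fetch_snippets(term, n_snippets)\n    # Step 2: cluster the corpus into K concept clusters, K determined by the Greedy EM algorithm.\n    clusters = greedy_em_cluster(corpus)\n    # Step 3: label each cluster with the term having the highest weighted log likelihood ratio.\n    return [top_wlr_term(c, clusters) for c in clusters]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed Approach",
"sec_num": "3.1"
},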
{
"text": "Because we have no idea about the exact number of concepts strongly associated with each given term, thus for each term it's straightforward to apply the Greedy EM algorithm to clustering the relevant documents into an auto-determined number of clusters. The algorithm is a top-down clustering algorithm which is based on the assumptions of the theoretical evidence developed in [5, 7] . Its basic idea is to suppose that all the relevant documents belong to one component (concept cluster) at the initial stage, then successively adding one more component (concept cluster) and redistributing the relevant documents step by step until the maximal likelihood is approached. Figure 1 shows the proposed approach and it is summarized in the following. e) Keep i \u03b8 fixed, and use partial EM techniques, described in section 3.2.3, to update",
"cite_spans": [
{
"start": 379,
"end": 382,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 383,
"end": 385,
"text": "7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 674,
"end": 682,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Greedy EM Algorithm",
"sec_num": "3.2"
},
{
"text": "1 K \u03b8 + . f) Set 1 1 1 | { , } i K k C t k k w C \u03b8 \u03b8 \u03b8 + + = = . Calculate the likelihood, 1 ( ) i L \u03b8 + . g) Stop if 1 ( ) i L \u03b8 + < ( ) i L \u03b8 ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Greedy EM Algorithm",
"sec_num": "3.2"
},
{
"text": "otherwise, return to c) and set K=K+1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Greedy EM Algorithm",
"sec_num": "3.2"
},
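{
"text": "The outer loop of the Greedy EM procedure can be sketched as follows; init_single_component, allocate_component, partial_em and likelihood are assumed helpers standing in for the initialization of Section 3.2.2, the partial EM update of Section 3.2.3 and the likelihood of equation (5). This is only an illustrative sketch of the steps listed above.\n\ndef greedy_em_cluster(corpus):\n    theta = init_single_component(corpus)            # all documents belong to one mixture initially\n    best = likelihood(theta, corpus)\n    while True:\n        candidate = allocate_component(theta, corpus)  # c)-d): allocate and initialize a new mixture (Section 3.2.2)\n        candidate = partial_em(candidate, corpus)      # e): update only the newly added component (Section 3.2.3)\n        new = likelihood(candidate, corpus)            # f): L(theta^(i+1))\n        if new < best:                                 # g): stop when the likelihood no longer improves\n            return theta\n        theta, best = candidate, new                   # otherwise keep the new mixture and set K = K + 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Greedy EM Algorithm",
"sec_num": "3.2"
},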
{
"text": "As described previously, all relevant documents belong to one mixture initially; then check the likelihood to see if it is proper to add a new mixture. Thus, given K mixture components, the likelihood of K+1 is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Likelihood Function",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 1 ( ) (1 ) ( ) ( , ) K T K T T K L D L D D \u03b1 \u03b1 \u03c6 \u03b8 + + = \u2212 +",
"eq_num": "(5)"
}
],
"section": "Likelihood Function",
"sec_num": "3.2.1"
},
{
"text": "with \u03b1 in (0,1), where otherwise, reallocate a new one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Likelihood Function",
"sec_num": "3.2.1"
},
{
"text": "In [7] , a vector space model, initializing the newly added mixture is to calculate the first derivation with respect to \u03b1 and to assume that the covariance matrix is a constant matrix. However, in our proposed probability framework, it is much more complicated because of a large amount of word probabilities, ",
"cite_spans": [
{
"start": 3,
"end": 6,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initialize Allocated Mixture",
"sec_num": "3.2.2"
},
{
"text": "In order to simplify the updating problem, we take advantage of partial EM algorithm for locally search the maxima of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Update with Partial EM Algorithm",
"sec_num": "3.2.3"
},
{
"text": "1 ( ) K T L D + .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Update with Partial EM Algorithm",
"sec_num": "3.2.3"
},
{
"text": "A notable property is that the original modeling for k=1 to K are fixed, only ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Update with Partial EM Algorithm",
"sec_num": "3.2.3"
},
{
"text": "1 K \u03b8 + is updated. 1 { ( | )} 1 ( , ) ( | ) | | ( , ) ( | ) T T V t t n n s n n t K t K V D n D V K s n t C K w p w C N w d p C d V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Update with Partial EM Algorithm",
"sec_num": "3.2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "+ = = + + \u2211",
"eq_num": "(7)"
}
],
"section": "Update with Partial EM Algorithm",
"sec_num": "3.2.3"
},
{
"text": "where |V | and |DT| means the number of vocabularies and the number of documents shown in the T D respectively. Since only the parameters of the new components are updated, partial EM steps constitute a simple and fast method for locally searching for the maxima of 1 K L + , without needing to resort to other computationally demanding nonlinear optimization methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Update with Partial EM Algorithm",
"sec_num": "3.2.3"
},
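{
"text": "A minimal sketch of the re-estimation in equation (7); the data structures (per-document word counts for N(w_t, d_n) and per-document posteriors p(C_{K+1} | d_n)) and the function name are assumptions made for illustration.\n\ndef update_new_component(doc_counts, posteriors, vocab):\n    # doc_counts: one dict per document mapping word w_t to its count N(w_t, d_n)\n    # posteriors: p(C_{K+1} | d_n) for each document d_n in D_T\n    # Returns the Laplace-smoothed estimates p(w_t | C_{K+1}) of equation (7).\n    num = {w: 1.0 for w in vocab}\n    for counts, post in zip(doc_counts, posteriors):\n        for w, c in counts.items():\n            if w in num:\n                num[w] += c * post\n    denom = sum(num.values())  # |V| + the double sum over s and n of N(w_s, d_n) p(C_{K+1} | d_n)\n    return {w: v / denom for w, v in num.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Update with Partial EM Algorithm",
"sec_num": "3.2.3"
},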
{
"text": "In the process, the documents in the training data T D will be clustered with their similarity into a set of clusters and keywords that can represent the concept of each cluster will be extracted. After clustering the relevant documents into several clusters, the distribution of each cluster in a probabilistic form can be calculated with the data in the cluster by applying the Greedy EM algorithm already described previously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Generation",
"sec_num": "3.3"
},
{
"text": "Next, we have to discover the hidden semantics inside each document cluster. However, retrieving the hidden semantics from a set of documents is a big issue. For convenience, we simply represent the meaning of a cluster with the word that has the highest weighted log likelihood ratio 1 among the contained words in this cluster. With this assumption, the \"representative\" word could be chosen directly by comparing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Generation",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( | ) ( | ) ( | )log ( | ) t k t k t k k t p w C WLR w C p w C p w C \uf8eb \uf8f6 = \uf8ec \uf8f7 \uf8ed \uf8f8",
"eq_num": "(8)"
}
],
"section": "Keyword Generation",
"sec_num": "3.3"
},
{
"text": "for k=1 to K, where ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Generation",
"sec_num": "3.3"
},
{
"text": "( | ) k t C p w",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Generation",
"sec_num": "3.3"
},
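{
"text": "The weighted log likelihood ratio of equation (8) and the cluster labeling of Step 3 can be computed as in the following sketch, where p_in stands for p(w_t | C_k) and p_out for the probability of w_t in the clusters other than C_k; the small epsilon is an assumption added only to avoid taking the logarithm of zero, and the function names are illustrative.\n\nimport math\n\ndef wlr(p_in, p_out, eps=1e-12):\n    # Weighted log likelihood ratio of a word for cluster C_k, as in equation (8).\n    return p_in * math.log((p_in + eps) / (p_out + eps))\n\ndef cluster_label(probs_in, probs_out):\n    # probs_in: p(w_t | C_k) for each word; probs_out: p(w_t | the other clusters).\n    # The word with the highest WLR is chosen as the label of C_k.\n    return max(probs_in, key=lambda w: wlr(probs_in[w], probs_out.get(w, 0.0)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Generation",
"sec_num": "3.3"
},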
{
"text": "In real world, for an unknown term, its associated concepts are what we are interested in; thus, in this section, we will show the experiment results obtained in evaluating a set of test terms. Before the larger amount of experiment, let's preview the experiment of \"ATM\" to determine the number of retrieved relevant documents. Google (http://www.google.com) is the main search engine which we utilized in the following experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXPERIMENTS",
"sec_num": "4."
},
{
"text": "1 The sum of this quantity over all words is the Kullback-Leibler divergence between the distribution of words in k C and the distribution of words in k C , (Cover and Thomas, 1991).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appropriate Number of Retrieved Relevant Documents",
"sec_num": "4.1"
},
{
"text": "We assume that too many retrieved documents will cause noises, but too few won't contain enough information about this unknown term. Thus, the appropriate number of retrieved relevant documents has to be decided. \"ATM\" in dictionary has six hidden semantics, which are \"Automated Teller Machine\", \"Asynchronous Transfer Mode\", \"Act of Trade Marks\", \"Air Traffic Management\", \"Atmosphere\" and \"Association of Teachers of Mathematics\" respectively. Table 1 shows the bi-gram extracted concepts via number of retrieved texts. Table 1 shows a challenge that choosing the term with the highest weighted log likelihood ratio as the label of one concept cluster can not effectively describe its complete semantics appropriately; in addition, for example, \"Automated Teller Machine\" is composed of many aspects, like security, location, cards, and etc. Thus, concept domain of \"Automated Teller Machine\" could be figured out while \"ATM applications\", \"ATM locations\", \"ATM surcharges\", and some other aspects associated with \"Automated Teller Machine\" being extracted. Similarly, \"Air Traffic Management\" could be figured out while \"public transport\", \"air traffic\" being extracted.",
"cite_spans": [],
"ref_spans": [
{
"start": 447,
"end": 454,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 523,
"end": 530,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Appropriate Number of Retrieved Relevant Documents",
"sec_num": "4.1"
},
{
"text": "Except the six hidden semantic clusters in ATM, some other concept clusters were also extracted, e.g. \"Amateur Telescope Maker\" because of \"telescope makers\" extracted and \"Ataxia Telangiectasia Mutated\" because of \"ataxia teleangiectasia\" extracted. One more interesting thing is that the more retrieved relevant documents not necessarily direct to the more extracted concept clusters (Figure 1 ). This phenomenon is caused from the extra noises extracted from the more relevant documents. The extra noises will not only worsen the performance of the greedy EM algorithm but also generate improper relevant terms from the clustered groups, which will not be considered as \"good\" categories manually. For each test term, considering the time cost and the marginal gain of extracted concepts, 600 relevant documents were retrieved from Web. Of course 600 relevant documents are not always appropriate for all cases, but for convenience, it was adopted.",
"cite_spans": [],
"ref_spans": [
{
"start": 386,
"end": 395,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appropriate Number of Retrieved Relevant Documents",
"sec_num": "4.1"
}
],
"back_matter": [
{
"text": "We have presented a potential approach to finding relevant concepts for terms via utilizing World Wide Web. This approach obtained an encouraging experimental result in testing Yahoo!'s computer science hierarchy. However, the work needs more in-depth study. As what we mentioned previously, choosing the word with the highest weighted log likelihood ratio as the concept of a clustered group after the Greedy EM algorithm does not provide enough representative. In addition, one concept usually contains many domains, e.g. \"ATM\" contains security, teller machine, transaction cost, and etc. Thus, distinguishing the extracted keywords into a certain concept still needs human intervention. On the other hand, in order to solve the problem of \"too much effort\" of the Greedy EM algorithm, we need to modify it with another convergence criterion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSIONS AND FUTURE WORK",
"sec_num": "5."
},
{
"text": "The experiments took the \"Computer Science\" hierarchy in Yahoo! as the evaluation. There were totally 36 concepts in second level in the \"Computer Science\" hierarchy (as in Table 2 ), 177 objects in the third level and 278 objects in fourth level, all rooted at the concept \"Computer Science\". We divided the objects in thirdlevel and fourth-level into three groups: full articles, which were the Web pages linked from Yahoo!'s website list under the Computer Science hierarchy, short documents, which were the site description offered by Yahoo!, and text segments, which were the directory names. We randomly chose 30 text segments from the third-level plus the fourth-level objects. The 30 proper nouns are shown in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 173,
"end": 180,
"text": "Table 2",
"ref_id": null
},
{
"start": 718,
"end": 725,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Description",
"sec_num": "4.2"
},
{
"text": "In Section 3, the Greedy EM algorithm is treated as the unsupervised learning method which clusters retrieved relevant documents to extract hidden concepts for each test term. Table 4 shows the extracted bi-gram concept clusters for the 30 randomly chosen CS terms; this means that only bi-gram terms in the retrieved documents were extracted. The number of hidden concept clusters in each term was determined automatically by the Greedy EM algorithm. Table 4 , it is encouraging that the proposed approach extracted the main idea for most test CS terms. Taking \"Trigonometry\" for example, if we have no idea about \"Trigonometry\", then from \"function\" and \"algebra\" in Table 4 , there is not difficult to guess that it may be a kind of mathematical functions and developed by Benjamin Bannekers. Again, our proposed approach caught that \"Darwin\" is not only a British Naturalist but also the name of graphical software.Even though the experiment result shows encouraging performance, the result was still bothered by many duplicated and noisy aspects. For example, \"CORBA\" means \"Common Object Request Broker Architecture\"; however, \"C++ software\" and \"application development\" actually only provide vague or not necessary information about \"CORBA\". This was caused by the \"too much effort\" of the Greedy EM algorithm, which clusters the retrieved mixtures into too many groups.",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 183,
"text": "Table 4",
"ref_id": null
},
{
"start": 452,
"end": 459,
"text": "Table 4",
"ref_id": null
},
{
"start": 669,
"end": 676,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relevant-Concepts Extraction",
"sec_num": "4.3"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Data Clustering: A Review",
"authors": [
{
"first": "A",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Murty",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Flynn",
"suffix": ""
}
],
"year": 1999,
"venue": "ACM Computing Surveys",
"volume": "31",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Jain, M. Murty, and P. Flynn. Data Clustering: A Review. In ACM Computing Surveys, 31(3), September 1999.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "LiveClassifier: Creating Hierarchical Text Classifiers through Web Corpora",
"authors": [
{
"first": "C",
"middle": [
"C"
],
"last": "Huang",
"suffix": ""
},
{
"first": "S",
"middle": [
"L"
],
"last": "Chuang",
"suffix": ""
},
{
"first": "L",
"middle": [
"F"
],
"last": "Chien",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. C. Huang, S. L. Chuang and L. F. Chien. LiveClassifier: Creating Hierarchical Text Classifiers through Web Corpora, WWW (2004).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Clustering Algorithms. In Information Retrieval Data Structures and Algorithms, William Frakes and Ricardo Baeza-Yates",
"authors": [
{
"first": "E",
"middle": [],
"last": "Rasmussen",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Rasmussen. Clustering Algorithms. In Information Retrieval Data Structures and Algorithms, William Frakes and Ricardo Baeza-Yates, editors, Prentice Hall, 1992.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Cluster Hypothesis Revisited",
"authors": [
{
"first": "E",
"middle": [],
"last": "Voorhees",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of SIGIR 1985",
"volume": "",
"issue": "",
"pages": "95--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Voorhees. The Cluster Hypothesis Revisited. In Proceedings of SIGIR 1985, 95-104.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Efficient Greedy Learning of Gaussian Mixture Models",
"authors": [
{
"first": "J",
"middle": [
"J"
],
"last": "Verbeek",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Vlassis",
"suffix": ""
},
{
"first": "B",
"middle": [
"J A"
],
"last": "Krose",
"suffix": ""
}
],
"year": 2003,
"venue": "Neural Computation",
"volume": "15",
"issue": "2",
"pages": "469--485",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. J. Verbeek, N. Vlassis and B. J. A. Krose. Efficient Greedy Learning of Gaussian Mixture Models. Neural Computation, 15 (2), pp.469-485, 2003.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Mixture Density Estimation",
"authors": [
{
"first": "J",
"middle": [
"Q"
],
"last": "Li",
"suffix": ""
},
{
"first": "A",
"middle": [
"R"
],
"last": "Barron",
"suffix": ""
}
],
"year": 2000,
"venue": "Advances in Neurarl Information processing Systems",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Q. Li and A. R. Barron. Mixture Density Estimation. In Advances in Neurarl Information processing Systems 12, The MIT Press, 2000.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Greedy Algorithm for Gaussian Mixture Learning",
"authors": [
{
"first": "N",
"middle": [],
"last": "Vlassis",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Likas",
"suffix": ""
}
],
"year": 2002,
"venue": "Neural Processing Letters (15)",
"volume": "",
"issue": "",
"pages": "77--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Vlassis and A. Likas A Greedy Algorithm for Gaussian Mixture Learning. In Neural Processing Letters (15), pp. 77-87, 2002.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recent Trends in Hierarchic Document Clustering: A Critical Review",
"authors": [
{
"first": "P",
"middle": [],
"last": "Willett",
"suffix": ""
}
],
"year": 1988,
"venue": "Information Processing and Management",
"volume": "24",
"issue": "",
"pages": "577--597",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Willett. Recent Trends in Hierarchic Document Clustering: A Critical Review. In Information Processing and Management, 24(5), 577-597, 1988.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": null,
"venue": "Machine Learning: Proceedings of the Fourteenth International Conference (ICML '97)",
"volume": "",
"issue": "",
"pages": "143--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Joachims. A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization. In Machine Learning: Proceedings of the Fourteenth International Conference (ICML '97), pp. 143-151.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "means the jth words and | | i d means the number of words in i d . Second, every document is assumed generated by a mixture of components{ } k C (relevant concept clusters), for k=1 to K. Thus, we can characterize the likelihood of document i d with a sum of total probability over all mixture components:",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Thus, the parameters of an individual class are the collection of word probabilities, The other parameters are the weight of mixture class, ( | ) k p C \u03b8 , that is, the prior probabilities of class, k C . The set of parameters is | As will be described in next section, the proposed document clustering is designed fully based on the parameters.",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "then stop the allocation of new mixture;",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": ", we take the approximation of \u03b1 in[6",
"num": null
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"text": "the probabilities of word t w in those clusters except k C .",
"num": null
},
"TABREF0": {
"html": null,
"text": "Extracted",
"type_str": "table",
"content": "<table><tr><td/><td>concept clusters in \"ATM\" with respect to different numbers of retrieved</td></tr><tr><td/><td>relevant terms</td></tr><tr><td># of training</td><td>Extracted concept clusters</td></tr><tr><td>texts</td><td/></tr><tr><td>100</td><td>ATM {card, cell, internetworking, standards}, asynchronous transfer</td></tr><tr><td>200</td><td>ATM {access, information, networking, standards, locations}, credit union,</td></tr><tr><td/><td>debit cards , safety tips, teller machine, telescope makers</td></tr><tr><td>300</td><td>ATM {applications, cards, fees, networking, surcharges, locations},</td></tr><tr><td/><td>adaptation layers, credit union, debit cards, token rings, teller machine,</td></tr><tr><td>400</td><td>ATM {applications, crashes, services, transactions, networking, security},</td></tr><tr><td/><td>branch locator, financial institution, personal banking, public transport,</td></tr><tr><td/><td>telangiectasia mutated</td></tr><tr><td>500</td><td>ataxia telangiectasia, ATM {applications, asynchronous, crashes, products,</td></tr><tr><td/><td>protocol, resource}</td></tr><tr><td>600</td><td>ataxia telangiectasia, ATM {applications, crashes, protocol, technology},</td></tr><tr><td/><td>atmospheric science, communication technology, electronics engineering,</td></tr><tr><td/><td>network interface, public transport, wan switches, rights reserved</td></tr><tr><td>700</td><td>air traffic, ataxia telangiectasia, ATM {crashes, debit, encryptor, protocol,</td></tr><tr><td/><td>surcharge, traffic}, atmospheric science, checking account, communication</td></tr><tr><td/><td>technology, electronics engineering, network interface, public transport</td></tr><tr><td>800</td><td>ataxia telangiectasia, ATM {adapters, crime, debit, protocol, surcharge,</td></tr><tr><td/><td>usage, cards}, atmospheric sciences, business checking, communication</td></tr><tr><td/><td>technology</td></tr><tr><td>900</td><td>24 hours, ataxia telangiectasia, ATM {adapters, connections, crashes,</td></tr><tr><td/><td>crime, debit, industry, protocol, resources}</td></tr><tr><td>1000</td><td>ATM networks, Arizona federal, 24 hour</td></tr></table>",
"num": null
}
}
}
}