{ "paper_id": "I05-1030", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:26:12.084794Z" }, "title": "PLSI Utilization for Automatic Thesaurus Construction", "authors": [ { "first": "Masato", "middle": [], "last": "Hagiwara", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nagoya University", "location": { "addrLine": "Furo-cho, Chikusa-ku", "postCode": "464-8603", "settlement": "Nagoya", "country": "JAPAN" } }, "email": "hagiwara@kl.i.is.nagoya-u.ac.jp" }, { "first": "Yasuhiro", "middle": [], "last": "Ogawa", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nagoya University", "location": { "addrLine": "Furo-cho, Chikusa-ku", "postCode": "464-8603", "settlement": "Nagoya", "country": "JAPAN" } }, "email": "yasuhiro@kl.i.is.nagoya-u.ac.jp" }, { "first": "Katsuhiko", "middle": [], "last": "Toyama", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nagoya University", "location": { "addrLine": "Furo-cho, Chikusa-ku", "postCode": "464-8603", "settlement": "Nagoya", "country": "JAPAN" } }, "email": "toyama@kl.i.is.nagoya-u.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "When acquiring synonyms from large corpora, it is important to deal not only with such surface information as the context of the words but also their latent semantics. This paper describes how to utilize a latent semantic model PLSI to acquire synonyms automatically from large corpora. PLSI has been shown to achieve a better performance than conventional methods such as tf\u2022idf and LSI, making it applicable to automatic thesaurus construction. Also, various PLSI techniques have been shown to be effective including: (1) use of Skew Divergence as a distance/similarity measure; (2) removal of words with low frequencies, and (3) multiple executions of PLSI and integration of the results.", "pdf_parse": { "paper_id": "I05-1030", "_pdf_hash": "", "abstract": [ { "text": "When acquiring synonyms from large corpora, it is important to deal not only with such surface information as the context of the words but also their latent semantics. This paper describes how to utilize a latent semantic model PLSI to acquire synonyms automatically from large corpora. PLSI has been shown to achieve a better performance than conventional methods such as tf\u2022idf and LSI, making it applicable to automatic thesaurus construction. Also, various PLSI techniques have been shown to be effective including: (1) use of Skew Divergence as a distance/similarity measure; (2) removal of words with low frequencies, and (3) multiple executions of PLSI and integration of the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Thesauri, dictionaries in which words are arranged according to meaning, are one of the most useful linguistic sources, having a broad range of applications, such as information retrieval and natural language understanding. Various thesauri have been constructed so far, including WordNet [6] and Bunruigoihyo [14] . Conventional thesauri, however, have largely been compiled by groups of language experts, making the construction and maintenance cost very high. It is also difficult to build a domain-specific thesaurus flexibly. 
Thus it is necessary to construct thesauri automatically using computers.", "cite_spans": [ { "start": 289, "end": 292, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 310, "end": 314, "text": "[14]", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many studies have been done for automatic thesaurus construction. In doing so, synonym acquisition is one of the most important techniques, although a thesaurus generally includes other relationships than synonyms (e.g., hypernyms and hyponyms). To acquire synonyms automatically, contextual features of words, such as co-occurrence and modification are extracted from large corpora and often used. Hindle [7] , for example, extracted verb-noun relationships of subjects/objects and their predicates from a corpus and proposed a method to calculate similarity of two words based on their mutual information. Although methods based on such raw co-occurrences are simple yet effective, in a naive implementation some problems arise: namely, noises and sparseness. Being a collection of raw linguistic data, a corpus generally contains meaningless information, i.e., noises. Also, co-occurrence data extracted from corpora are often very sparse, making them inappropriate for similarity calculation, which is also known as the \"zero frequency problem.\" Therefore, not only surface information but also latent semantics should be considered when acquiring synonyms from large corpora.", "cite_spans": [ { "start": 406, "end": 409, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several latent semantic models have been proposed so far, mainly for information retrieval and document indexing. The most commonly used and prominent ones are Latent Semantic Indexing (LSI) [5] and Probabilistic LSI (PLSI) [8] . LSI is a geometric model based on the vector space model. It utilizes singular value decomposition of the co-occurrence matrix, an operation similar to principal component analysis, to automatically extract major components that contribute to the indexing of documents. It can alleviate the noise and sparseness problems by a dimensionality reduction operation, that is, by removing components with low contributions to the indexing. However, the model lacks firm, theoretical basis [9] and the optimality of inverse document frequency (idf) metric, which is commonly used to weight elements, has yet to be shown [13] .", "cite_spans": [ { "start": 191, "end": 194, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 224, "end": 227, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 713, "end": 716, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 843, "end": 847, "text": "[13]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the contrary, PLSI, proposed by Hofmann [8] , is a probabilistic version of LSI, where it is formalized that documents and terms co-occur through a latent variable. PLSI puts no assumptions on distributions of documents or terms, while LSI performs optimal model fitting, assuming that documents and terms are under Gaussian distribution [9] . 
Moreover, ad hoc weighting such as idf is not necessary for PLSI, although it is for LSI, and it is shown experimentally to outperform the former model [8] .", "cite_spans": [ { "start": 43, "end": 46, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 341, "end": 344, "text": "[9]", "ref_id": "BIBREF8" }, { "start": 499, "end": 502, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This study applies the PLSI model to the automatic acquisition of synonyms by estimating each word's latent meanings. First, a number of verb-noun pairs were collected from a large corpus using heuristic rules. This operation is based on the assumption that semantically similar words share similar contexts, which was also employed in Hindle's work [7] and has been shown to be considerably plausible. Secondly, the co-occurrences obtained in this way were fit into the PLSI model, and the probability distribution of latent classes was calculated for each noun. Finally, similarity for each pair of nouns can be calculated by measuring the distances or the similarity between two probability distributions using an appropriate distance/similarity measure. We then evaluated and discussed the results using two evaluation criteria, discrimination rates and scores.", "cite_spans": [ { "start": 350, "end": 353, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper also discusses basic techniques when applying PLSI to the automatic acquisition of synonyms. In particular, the following are discussed from methodological and experimental views: (1) choice of distance/similarity measures between probability distributions; (2) filtering words according to their frequencies of occurrence; and (3) multiple executions of PLSI and integration of the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper is organized as follows: in Sect. 2 a brief explanation of the PLSI model and calculation is provided, and Sect. 3 outlines our approach. Sect. 4 shows the results of comparative experiments and basic techniques. Sect. 5 concludes this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This section provides a brief explanation of the PLSI model in information retrieval settings. The PLSI model, which is based on the aspect model, assumes that document d and term w co-occur through latent class z, as shown in Fig. 1 (a) .", "cite_spans": [], "ref_spans": [ { "start": 227, "end": 237, "text": "Fig. 1 (a)", "ref_id": null } ], "eq_spans": [], "section": "The PLSI Model", "sec_num": "2" }, { "text": "The co-occurrence probability of documents and terms is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The PLSI Model", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (d, w) = P (d) z P (z|d)P (w|z).", "eq_num": "(1)" } ], "section": "The PLSI Model", "sec_num": "2" }, { "text": "Note that this model can be equivalently rewritten as whose graphical model representation is shown in Fig. 1 (b) . This is a symmetric parameterization with respect to documents and terms. 
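To make the symmetric parameterization concrete, the following is a minimal sketch (not the authors' implementation) of evaluating P(d, w) = sum_z P(z) P(d|z) P(w|z) once the component distributions are available; the array names and shapes are illustrative assumptions.

```python
import numpy as np

# Illustrative shapes: Z latent classes, D documents, W terms.
# Pz has shape (Z,); Pd_z has shape (Z, D) with rows P(d|z);
# Pw_z has shape (Z, W) with rows P(w|z).
def joint_probability(Pz, Pd_z, Pw_z):
    # P(d, w) = sum_z P(z) P(d|z) P(w|z)  -- the symmetric parameterization
    return np.einsum('z,zd,zw->dw', Pz, Pd_z, Pw_z)
```

The co-occurrence log-likelihood maximized below is then the sum of N(d, w) log P(d, w) over the observed pairs.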
The latter parameterization is used in the experiment section because of its simple implementation.", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 113, "text": "Fig. 1 (b)", "ref_id": null } ], "eq_spans": [], "section": "The PLSI Model", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (d, w) = z P (z)P (d|z)P (w|z),", "eq_num": "(2)" } ], "section": "The PLSI Model", "sec_num": "2" }, { "text": "Theoretically, probabilities P (d), P (z|d), P (w|z) are determined by maximum likelihood estimation, that is, by maximizing the likelihood of document term co-occurrence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The PLSI Model", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L = d,w N (d, w) log P (d, w),", "eq_num": "(3)" } ], "section": "The PLSI Model", "sec_num": "2" }, { "text": "where N (d, w) is the frequency document d and term w co-occur. While the co-occurrence of document d and term w in the corpora can be observed directly, the contribution of latent class z cannot be directly seen in this model. For the maximum likelihood estimation of this model, the EM algorithm [1] , which is used for the estimation of systems with unobserved (latent) data, is used. The EM algorithm performs the estimation iteratively, similar to the steepest descent method.", "cite_spans": [ { "start": 298, "end": 301, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "The PLSI Model", "sec_num": "2" }, { "text": "The original PLSI model, as described above, deals with co-occurrences of documents and terms, but it can also be applied to verbs and nouns in the corpora. In this way, latent (\"give\", \"to\", \"colleague\")", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "PP VP VBD n NP S VP v (v, subj, n) (v, obj, n) (v, prep, n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "but (v, obj, n) when the verb is \"be\" + past participle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "n NP VP baseVP v n NP PP prep PP * VP baseVP v Rule 1\uff1a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Rule 2\uff1a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Rule 3\uff1a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "(e) Rules for co-occurrence identification Fig. 3 . Co-occurrence extraction class distribution, which can be interpreted as latent \"meaning\" corresponding to each noun, is obtained. Semantically similar words are then obtained accordingly, because words with similar meaning have similar distributions. Fig. 2 outlines our approach, and the following subsections provide the details.", "cite_spans": [], "ref_spans": [ { "start": 43, "end": 49, "text": "Fig. 3", "ref_id": null }, { "start": 304, "end": 310, "text": "Fig. 
2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "We adopt triples (v, c, n) extracted from the corpora as co-occurrences fit into the PLSI model, where v, c, and n represent a verb, case/preposition, and a noun, respectively. The relationships between nouns and verbs, expressed by c, include case relation (subject and object) as well as what we call here \"prepositional relation,\" that is, a cooccurrence through a preposition. Take the following sentence for example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Co-occurrence", "sec_num": "3.1" }, { "text": "John gave presents to his colleagues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Co-occurrence", "sec_num": "3.1" }, { "text": "First, the phrase structure ( Fig. 3(b) ) is obtained by parsing the original sentence ( Fig. 3(a) ). The resulting tree is then used to derive the dependency structure ( Fig. 3(c) ), using Collins' method [4] . Note that dependencies in baseNPs (i.e., noun phrases that do not contain NPs as their child constituents, shown as the groups of words enclosed by square brackets in Fig. 3(c) ), are ignored. Also, we introduced baseVPs, that is, sequences of verbs 1 , modals (MD), or adverbs (RB), of which the last word must be a verb. BaseVPs simplify the handling of sequences of verbs such as \"might not be\" and \"is always complaining.\" The last word of a baseVP represents the entire baseVP to which it belongs. That is, all the dependencies directed to words in a baseVP are redirected to the last verb of the baseVP.", "cite_spans": [ { "start": 206, "end": 209, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 30, "end": 39, "text": "Fig. 3(b)", "ref_id": null }, { "start": 89, "end": 98, "text": "Fig. 3(a)", "ref_id": null }, { "start": 171, "end": 180, "text": "Fig. 3(c)", "ref_id": null }, { "start": 379, "end": 388, "text": "Fig. 3(c)", "ref_id": null } ], "eq_spans": [], "section": "Extraction of Co-occurrence", "sec_num": "3.1" }, { "text": "Finally, co-occurrences are extracted and identified by matching the dependency patterns and the heuristic rules for extraction, which are all listed in Fig. 3 (e). For example, since the label of the dependency \"John\" \u2192\"gave\" is \"NP S VP\", the noun \"John\" is identified as the subject of the verb \"gave\" ( Fig. 3(d) ). Likewise, the dependencies \"presents\"\u2192\"gave\" and \"his colleagues\"\u2192\"to\"\u2192\"gave\" are identified as a verb-object relation and prepositional relation through \"to\".", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 159, "text": "Fig. 3", "ref_id": null }, { "start": 307, "end": 316, "text": "Fig. 3(d)", "ref_id": null } ], "eq_spans": [], "section": "Extraction of Co-occurrence", "sec_num": "3.1" }, { "text": "A simple experiment was conducted to test the effectiveness of this extraction method, using the corpus and the parser mentioned in the experiment section. Cooccurrence extraction was performed for the 50 sentences randomly extracted from the corpus, and precision and recall turned out to be 88.6% and 78.1%, respectively. In this context, precision is more important than recall because of the substantial size of the corpus, and some of the extraction errors result from parsing error caused by the parser, whose precision is claimed to be around 90% [2] . 
Therefore, we conclude that this method and its performance are sufficient for our purpose.", "cite_spans": [ { "start": 554, "end": 557, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Extraction of Co-occurrence", "sec_num": "3.1" }, { "text": "While the PLSI model deals with dyadic data (d, w) of document d and term w, the cooccurrences obtained by our method are triples (v, c, n) of a verb v, a case/preposition c, and a noun n. To convert these triples into dyadic data (pairs), verb v and case/ preposition c are paired as (v, c) and considered a new \"virtual\" verb v. This enables it to handle the triples as the co-occurrence (v, n) of verb v and noun n to which the PLSI model becomes applicable. Pairing verb v and case/preposition c also has a benefit that such phrasal verbs as \"look for\" or \"get to\" can be naturally treated as a single verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applying PLSI to Extracted Co-occurence Data", "sec_num": "3.2" }, { "text": "After the application of PLSI, we obtain probabilities P (z), P (v|z), and P (n|z). Using Bayes theorem, we then obtain P (z|n), which corresponds to the latent class distribution for each noun. In other words, distribution P (z|n) represents the features of meaning possessed by noun n. Therefore, we can calculate the similarity between nouns n 1 and n 2 by measuring the distance or similarity between the two corresponding distribution, P (z|n 1 ) and P (z|n 2 ), using an appropriate measure. The choice of measure affects the synonym acquisition results and experiments on comparison of distance/similarity measures are detailed in Sect. 4.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Applying PLSI to Extracted Co-occurence Data", "sec_num": "3.2" }, { "text": "This section includes the results of comparison experiments and those on the basic PLSI techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "The automatic acquisition of synonyms was conducted according to the method described in Sect. 3, using WordBank (190,000 sentences, 5 million words) [3] as a cor-pus. Charniak's parser [2] was used for parsing and TreeTagger [16] for stemming. A total of 702,879 co-occurrences was extracted by the method described in Sect. 3.1.", "cite_spans": [ { "start": 150, "end": 153, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 186, "end": 189, "text": "[2]", "ref_id": "BIBREF1" }, { "start": 226, "end": 230, "text": "[16]", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Conditions", "sec_num": "4.1" }, { "text": "When using EM algorithm to implement PLSI, overfitting, which aggravates the performance of the resultant language model, occasionally occurs. We employed the tempered EM (TEM) [8] algorithm, instead of a naive one, to avoid this problem. TEM algorithm is closely related to the deterministic annealing EM (DAEM) algorithm [17] , and helps avoid local extrema by introducing inverse temperature \u03b2. 
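As an illustration only, the following dense-matrix sketch shows one common way to implement tempered EM for the symmetric PLSI model over the verb-noun co-occurrence counts; the exact tempering scheme (raising the class-conditional terms to the power beta in the E-step), the array names, and the fixed iteration count are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def tempered_em_plsi(N, num_z, beta=0.86, iters=100, seed=0):
    # N: co-occurrence count matrix of shape (V, W), where N[v, n] is the
    #    frequency of virtual verb v with noun n; num_z: latent classes;
    #    beta: inverse temperature (beta = 1 recovers plain EM).
    rng = np.random.default_rng(seed)
    V, W = N.shape
    Pz = np.full(num_z, 1.0 / num_z)
    Pv_z = rng.random((num_z, V))
    Pv_z /= Pv_z.sum(axis=1, keepdims=True)
    Pn_z = rng.random((num_z, W))
    Pn_z /= Pn_z.sum(axis=1, keepdims=True)
    for _ in range(iters):
        # E-step: tempered posterior P(z|v,n), with the class-conditional
        # part raised to the power beta (one common TEM formulation).
        post = Pz[:, None, None] * (Pv_z[:, :, None] * Pn_z[:, None, :]) ** beta
        post /= post.sum(axis=0, keepdims=True) + 1e-12
        # M-step: re-estimate P(z), P(v|z), P(n|z) from expected counts.
        weighted = post * N[None, :, :]
        Pz = weighted.sum(axis=(1, 2))
        Pz /= Pz.sum()
        Pv_z = weighted.sum(axis=2)
        Pv_z /= Pv_z.sum(axis=1, keepdims=True) + 1e-12
        Pn_z = weighted.sum(axis=1)
        Pn_z /= Pn_z.sum(axis=1, keepdims=True) + 1e-12
    return Pz, Pv_z, Pn_z
```

The latent class distribution P(z|n) for each noun, used in Sect. 3.2, then follows from Bayes theorem, P(z|n) being proportional to P(z)P(n|z); a practical implementation would iterate only over observed (v, n) pairs rather than the full dense array.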
The parameter was set to \u03b2 = 0.86, considering the results of the preliminary experiments.", "cite_spans": [ { "start": 177, "end": 180, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 323, "end": 327, "text": "[17]", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Conditions", "sec_num": "4.1" }, { "text": "As the similarity/distance measure and frequency threshold t f , Skew Divergence (\u03b1 = 0.99) and t f = 15 were employed in the following experiments in response to the results from the experiments described in Sects. 4.3 and 4.5. Also, because estimation by EM algorithm is started from the random parameters and consequently the PLSI results change every time it is executed, the average performance of the three executions was recorded, except in Sect. 4.6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditions", "sec_num": "4.1" }, { "text": "The following two measures, discrimination rate and scores, were employed for the evaluation of automated synonym acquisition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measures for Performance", "sec_num": "4.2" }, { "text": "Discrimination rate Discrimination rate, originally proposed by Kojima et al. [10] , is the rate (percentage) of pairs (w 1 , w 2 ) whose degree of association between two words w 1 , w 2 is successfully discriminated by the similarity derived by a method. Kojima et al. dealt with three-level discrimination of a pair of words, that is, highly related (synonyms or nearly synonymous), moderately related (a certain degree of association), and unrelated (irrelevant). However, we omitted the moderately related level and limited the discrimination to two-level: high or none, because of the high cost of preparing a test set that consists of moderately related pairs.", "cite_spans": [ { "start": 78, "end": 82, "text": "[10]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Measures for Performance", "sec_num": "4.2" }, { "text": "The calculation of discrimination rate follows these steps: first, two test sets, one of which consists of highly related word pairs and the other of unrelated ones, were prepared, as shown in Fig. 4 . The similarity between w 1 and w 2 is then calculated for each pair (w 1 , w 2 ) in both test sets via the method under evaluation, and the pair is labeled highly related when similarity exceeds a given threshold t and unrelated when the similarity is lower than t. The number of pairs labeled highly related in the highly related test set and unrelated in the unrelated test set are denoted n a and n b , respectively. The discrimination rate is then given by:", "cite_spans": [], "ref_spans": [ { "start": 193, "end": 199, "text": "Fig. 4", "ref_id": null } ], "eq_spans": [], "section": "Measures for Performance", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 2 n a N a + n b N b ,", "eq_num": "(4)" } ], "section": "Measures for Performance", "sec_num": "4.2" }, { "text": "where N a and N b are the numbers of pairs in highly related and unrelated test sets, respectively. Since the discrimination rate changes depending on threshold t, maximum value is adopted by varying t. We created a highly related test set using the synonyms in WordNet [6] . 
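For concreteness, Eq. (4) together with the threshold search can be sketched as follows; the function and variable names are illustrative, and sim stands for whichever similarity measure is under evaluation.

```python
def discrimination_rate(related_pairs, unrelated_pairs, sim, thresholds):
    # related_pairs / unrelated_pairs: lists of word pairs (w1, w2);
    # sim(w1, w2): similarity under the method being evaluated.
    rel = [sim(w1, w2) for w1, w2 in related_pairs]
    unrel = [sim(w1, w2) for w1, w2 in unrelated_pairs]
    best = 0.0
    for t in thresholds:
        n_a = sum(1 for s in rel if s > t)     # labeled highly related
        n_b = sum(1 for s in unrel if s <= t)  # labeled unrelated
        rate = 0.5 * (n_a / len(rel) + n_b / len(unrel))
        best = max(best, rate)
    return best
```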
Pairs in a unrelated test set were prepared by first choosing two words randomly and then confirmed by hand whether the consisting two words are truly irrelevant. Table 5 . Procedure for score calculation", "cite_spans": [ { "start": 270, "end": 273, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 439, "end": 446, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Measures for Performance", "sec_num": "4.2" }, { "text": "We propose a score which is similar to precision used for information retrieval evaluation, but different in that it considers the similarity of words. This extension is based on the notion that the more accurately the degrees of similarity are assigned to the results of synonym acquisition, the higher the score values should be. Described in the following, along with Table 5 , is the procedure for score calculation. Table 5 shows the obtained synonyms and their similarity with respect to the base word \"computer.\" Results are obtained by calculating the similarity between the base word and each noun, and ranking all the nouns in descending order of similarity sim. The highest five are used for calculations in this example.", "cite_spans": [], "ref_spans": [ { "start": 371, "end": 378, "text": "Table 5", "ref_id": null }, { "start": 421, "end": 428, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Scores", "sec_num": null }, { "text": "The range of similarity varies based on such factors as the employed distance/ similarity measure, which unfavorably affects the score value. To avoid this, the values of similarity are normalized such that their sum equals one, as shown in the column sim * in Fig. 5 . Next, the relevance of each synonym to the base word is checked and evaluated manually, giving them three-level grades: highly related (A), moderately related (B), and unrelated (C), and relevance scores p = 1.0, 0.5, 0.0 are assigned for each grade, respectively (\"rel.(p)\" column in Fig. 5 ). Finally, each relevance score p is multiplied by corresponding similarity sim * , and the products (the p \u2022 sim * column in Fig. 5 ) are totaled and then multiplied by 100 to obtain a score, which is 55 in this case. In actual experiments, thirty words chosen randomly were adopted as base words, and the average of the scores of all base words was employed. Although this example considers only the top five words for simplicity, the top twenty words were used for evaluation in the following experiments.", "cite_spans": [], "ref_spans": [ { "start": 261, "end": 267, "text": "Fig. 5", "ref_id": null }, { "start": 555, "end": 561, "text": "Fig. 5", "ref_id": null }, { "start": 689, "end": 695, "text": "Fig. 5", "ref_id": null } ], "eq_spans": [], "section": "Scores", "sec_num": null }, { "text": "The choice of distance measure between two latent class distributions P (z|n i ), P (z|n j ) affects the performance of synonym acquisition. 
Here we focus on the following seven distance/similarity measures and compare their performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distance/Similarity Measures of Probability Distribution", "sec_num": "4.3" }, { "text": "-Kullback-Leibler (KL) divergence [12] : [12] : JS(p, q) = {KL(p || m)+KL(q || m)}/2, m = (p + q)/2 -Skew Divergence [11] :", "cite_spans": [ { "start": 34, "end": 38, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 41, "end": 45, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 117, "end": 121, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Distance/Similarity Measures of Probability Distribution", "sec_num": "4.3" }, { "text": "KL(p || q) = x p(x) log(p(x)/q(x)) -Jensen-Shannon (JS) divergence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distance/Similarity Measures of Probability Distribution", "sec_num": "4.3" }, { "text": "s \u03b1 (p || q) = KL(p || \u03b1q + (1 \u2212 \u03b1)p) -Euclidean distance: euc(p, q) = ||p \u2212 q|| -L 1 distance: L 1 (p, q) = x |p(x) \u2212 q(x)| -Inner product: p \u2022 q = x p(x)q(x) -Cosine: cos(p, q) = (p \u2022 q)/||p|| \u2022 ||q||", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distance/Similarity Measures of Probability Distribution", "sec_num": "4.3" }, { "text": "KL divergence is widely used for measuring the distance between two probability distributions. However, it has such disadvantages as asymmetricity and zero frequency problem, that is, if there exists x such that p(x) = 0, q(x) = 0, the distance is not defined. JS divergence, in contrast, is considered the symmetrized KL divergence and has some favorable properties: it is bounded [12] and does not cause the zero frequency problem. Skew Divergence, which has recently been receiving attention, has also solved the zero frequency problem by introducing parameter \u03b1 and mixing the two distributions. It has shown that Skew Divergence achieves better performance than the other measures [11] . The other measures commonly used for calculation of the similarity/distance of two vectors, namely Euclidean distance, L 1 distance (also called Manhattan Distance), inner product, and cosine, are also included for comparison.", "cite_spans": [ { "start": 382, "end": 386, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 686, "end": 690, "text": "[11]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Distance/Similarity Measures of Probability Distribution", "sec_num": "4.3" }, { "text": "Notice that the first five measures are of distance (the more similar p and q, the lower value), whereas the others, inner product and cosine, are of similarity (the more similar p and q, the higher value). We converted distance measure D to a similarity measure sim by the following expression:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distance/Similarity Measures of Probability Distribution", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "sim(p, q) = exp{\u2212\u03bbD(p, q)},", "eq_num": "(5)" } ], "section": "Distance/Similarity Measures of Probability Distribution", "sec_num": "4.3" }, { "text": "inspired by Mochihashi and Matsumoto [13] . Parameter \u03bb was determined in such a way that the average of sim doesn't change with respect to D. 
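For reference, the divergence-based measures above and the conversion of Eq. (5) can be sketched as below; the small smoothing constant used to sidestep undefined logarithms is an assumption of this sketch.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # KL(p || q); eps guards against the zero-frequency problem.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def js(p, q):
    m = (p + q) / 2
    return (kl(p, m) + kl(q, m)) / 2

def skew(p, q, alpha=0.99):
    # s_alpha(p || q) = KL(p || alpha*q + (1 - alpha)*p)
    return kl(p, alpha * q + (1 - alpha) * p)

def cosine(p, q):
    return float(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)))

def to_similarity(dist, lam):
    # Eq. (5): sim(p, q) = exp(-lambda * D(p, q))
    return float(np.exp(-lam * dist))
```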
Because KL divergence and Skew Divergence are asymmetric, the average of both directions (e.g. for KL divergence, 1 2 (KL(p||q) + KL(q||p))) is employed for the evaluation. Figure 6 shows the performance (discrimination rate and score) for each measure. It can be seen that Skew Divergence with parameter \u03b1 = 0.99 shows the highest performance of the seven, with a slight difference to JS divergence. These results, along with several studies, also show the superiority of Skew Divergence. In contrast, measures for vectors such as Euclidean distance achieved relatively poor performance compared to those for probability distributions.", "cite_spans": [ { "start": 37, "end": 41, "text": "[13]", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 316, "end": 324, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Distance/Similarity Measures of Probability Distribution", "sec_num": "4.3" }, { "text": "It may be difficult to estimate the latent class distributions for words with low frequencies because of a lack of sufficient data. These words can be noises that may degregate the results of synonym acquisition. Therefore, we consider removing such words with low frequencies before the execution of PLSI improves the performance. More specifically, we introduced threshold t f on the frequency, and removed nouns n i such that j tf i j < t f and verbs v j such that i tf i j < t f from the extracted co-occurrences. The discrimination rate change on varying threshold t f was measured and shown in Fig. 7 for d = 100, 200 , and 300. In every case, the rate increases with a moderate increase of t f , which shows the effectiveness of the removal of low frequency words. We consequently fixed t f = 15 in other experiments, although this value may depend on the corpus size in use. ", "cite_spans": [], "ref_spans": [ { "start": 600, "end": 623, "text": "Fig. 7 for d = 100, 200", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Word Filtering by Frequencies", "sec_num": "4.4" }, { "text": "Here the performances of PLSI and the following conventional methods are compared. In the following, N and M denote the numbers of nouns and verbs, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "-tf: The number of co-occurrence tf i j of noun n i and verb v j is used directly for similarity calculation. The corresponding vector n i to noun n i is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "n i = t [tf i 1 tf i 2 ... tf i M ].", "eq_num": "(6)" } ], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "-tf\u2022idf: The vectors given by tf method are weighted by idf. That is,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "n * i = t [tf i 1 \u2022 idf 1 tf i 2 \u2022 idf 2 ... 
tf i M \u2022 idf M ],", "eq_num": "(7)" } ], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "where idf j is given by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "idf j = log(N/df j ) max k log(N/df k ) ,", "eq_num": "(8)" } ], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "using df j , the number of distinct nouns that co-occur with verb v j . -tf+LSI: A co-occurrence matrix X is created using vectors n i defined by tf:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X = [n 1 n 2 ... n N ],", "eq_num": "(9)" } ], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "to which LSI is applied. -tf\u2022idf+LSI : A co-occurrence matrix X * is created using vectors n * i defined by tf\u2022idf:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X * = [n * 1 n * 2 ... n * N ],", "eq_num": "(10)" } ], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "to which LSI is applied. -Hindle's method: The method described in [7] is used. Whereas he deals only with subjects and objects as verb-noun co-occurrence, we used all the kinds of co-occurrence mentioned in Sect. The values of discrimination rate and scores are calculated for PLSI as well as the methods described above, and the results are shown in Fig. 8 . Because the number of latent classes d must be given beforehand for PLSI and LSI, the performances of the latent semantic models are measured varying d from 50 to 1,000 with a step of 50. The cosine measure is used for the similarity calculation of tf, tf\u2022idf, tf+LSI, and tf\u2022idf+LSI.", "cite_spans": [ { "start": 67, "end": 70, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 352, "end": 358, "text": "Fig. 8", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "The results reveal that the highest discrimination rate is achieved by PLSI, with the latent class number of approximately 100, although LSI overtakes with an increase of d. As for the scores, the performance of PLSI stays on top for almost all the values of d, strongly suggesting the superiority of PLSI over the conventional method, especially when d is small, which is often.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "The performances of tf and tf+LSI, which are not weighted by idf, are consistently low regardless of the value of d. 
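For reference, the tf-idf weighting of Eqs. (7)-(8) and the LSI reduction applied to it can be sketched as follows; this is an illustrative reconstruction using a plain truncated SVD, not the authors' code, and it stores nouns as rows rather than as the columns of Eq. (10).

```python
import numpy as np

def tfidf_lsi_vectors(tf, d):
    # tf: (N, M) noun-by-verb co-occurrence counts; d: kept dimensions.
    df = np.count_nonzero(tf, axis=0)              # nouns co-occurring with each verb
    idf = np.log(tf.shape[0] / np.maximum(df, 1))
    idf /= max(idf.max(), 1e-12)                   # normalization of Eq. (8)
    X = tf * idf                                   # Eq. (7), one row per noun
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :d] * S[:d]                        # reduced noun representations
```

Cosine over these reduced rows then gives the tf-idf+LSI similarities compared above.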
PLSI and LSI distinctly behave with respect to d, especially in the discrimination rate, whose cause require examination and discussion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison Experiments with Conventional Methods", "sec_num": "4.5" }, { "text": "In maximum likelihood estimation by EM algorithm, the initial parameters are set to values chosen randomly, and likelihood is increased by an iterative process. Therefore, the results are generally local extrema, not global, and they vary every execution, which is unfavorable. To solve this problem, we propose to execute PLSI several times and integrate the results to obtain a single one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration of PLSI Results", "sec_num": "4.6" }, { "text": "To achieve this, PLSI is executed several times for the same co-occurrence data obtained via the method described in Sect. 3.1. This yields N values of similarity sim 1 (n i , n j ), ..., sim N (n i , n j ) for each noun pair (n i , n j ). These values are integrated using one of the following four schemes to obtain a single value of similarity sim(n i , n j ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration of PLSI Results", "sec_num": "4.6" }, { "text": "-arithmetic mean: Integration results are shown in Fig. 9 , where the three sets of performance on the left are the results of single PLSI executions, i.e., before integration. On the right are the results after integration by the four schemes. It can be observed that integration improves the performance. More specifically, the results after integration are as good or better than any of the previous ones, except when using the minimum as a scheme.", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 57, "text": "Fig. 9", "ref_id": null } ], "eq_spans": [], "section": "Integration of PLSI Results", "sec_num": "4.6" }, { "text": "sim(n i , n j ) = 1 N N k=1 sim k (n i , n j ) -geometric mean:sim(n i , n j ) = N N k=1 sim k (n i , n j ) -maximum: sim(n i , n j ) = max k sim k (n i , n j ) -minimum: sim(n i , n j ) = min k sim k (n i , n j )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integration of PLSI Results", "sec_num": "4.6" }, { "text": "An additional experiment was conducted that varied N from 1 to 10 to confirm that such performance improvement is always achieved by integration. Results are shown in Fig. 10 , which includes the average and maximum of the N PLSI results (unintegrated) as well as the performance after integration using arithmetic average as the scheme. The results show that the integration consistently improves the performance for all 2 \u2264 N \u2264 10. An increase of the integration performance was observed for N \u2264 5, whereas increases in the average and maximum of the unintegrated results were relatively low. It is also seen that using N > 5 has less effect for integration.", "cite_spans": [], "ref_spans": [ { "start": 167, "end": 174, "text": "Fig. 10", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Integration of PLSI Results", "sec_num": "4.6" }, { "text": "In this study, automatic synonym acquisition was performed using a latent semantic model PLSI by estimating the latent class distribution for each noun. For this purpose, co-occurrences of verbs and nouns extracted from a large corpus were utilized. 
Discrimination rates and scores were used to evaluate the current method, and it was found that PLSI outperformed such conventional methods as tf\u2022idf and LSI. These results make PLSI applicable for automatic thesaurus construction. Moreover, the following techniques were found effective: (1) employing Skew Divergence as the distance/similarity measure between probability distributions; (2) removal of words with low frequencies, and (3) multiple executions of PLSI and integration of the results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "As future work, the automatic extraction of the hierarchical relationship of words also plays an important role in constructing thesauri, although only synonym relationships were extracted this time. Many studies have been conducted for this purpose, but extracted hyponymy/hypernymy relations must be integrated in the synonym relations to construct a single thesaurus based on tree structure. The characteristics of the latent class distributions obtained by the current method may also be used for this purpose.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In this study, similarity was calculated only for nouns, but one for verbs can be obtained using an identical method. This can be achieved by pairing noun n and case / preposition c of co-occurrence (v, c, n), not v and c as previously done, and executing PLSI for the dyadic data (v, (c, n)). By doing this, the latent class distributions for each verb v, and consequently the similarity between them, are obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Moreover, although this study only deals with verb-noun co-occurrences, other information such as adjective-noun modifications or descriptions in dictionaries may be used and integrated. This will be an effective way to improve the performance of automatically constructed thesauri.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "R. Dale et al. (Eds.): IJCNLP 2005, LNAI 3651, pp. 334-345, 2005. c Springer-Verlag Berlin Heidelberg 2005", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Ones expressed as VB, VBD, VBG, VBN, VBP, and VBZ by the Penn Treebank POS tag set[15].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A gentle tutorial on the EM algorithm and its application to parameter estimation for gaussian mixture and hidden markov models", "authors": [ { "first": "J", "middle": [], "last": "Bilmes", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bilmes, J. 1997. A gentle tutorial on the EM algorithm and its application to parameter estimation for gaussian mixture and hidden markov models. Technical Report ICSI-TR-97- 021, International Computer Science Institute (ICSI), Berkeley, CA.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A maximum-entropy-inspired parser", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2000, "venue": "", "volume": "1", "issue": "", "pages": "132--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, E. 2000. A maximum-entropy-inspired parser. 
NAACL 1, 132-139.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Collins Cobuild Major New Edition CD-ROM", "authors": [ { "first": "", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins. 2002. Collins Cobuild Major New Edition CD-ROM. HarperCollins Publishers.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A new statistical parser based on bigram lexical dependencies", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 1996, "venue": "Proc. of 34th ACL", "volume": "", "issue": "", "pages": "184--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M. 1996. A new statistical parser based on bigram lexical dependencies. Proc. of 34th ACL, 184-191.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Indexing by Latent Semantic Analysis", "authors": [ { "first": "S", "middle": [], "last": "Deerwester", "suffix": "" } ], "year": 1990, "venue": "Journal of the American Society for Information Science", "volume": "41", "issue": "6", "pages": "391--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deerwester, S., et al. 1990. Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science, 41(6):391-407.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "WordNet: an electronic lexical database", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, C. 1998. WordNet: an electronic lexical database. MIT Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Noun classification from predicate-argument structures", "authors": [ { "first": "D", "middle": [], "last": "Hindle", "suffix": "" } ], "year": 1990, "venue": "Proc. of the 28th Annual Meeting of the ACL", "volume": "", "issue": "", "pages": "268--275", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hindle, D. 1990. Noun classification from predicate-argument structures. Proc. of the 28th Annual Meeting of the ACL, 268-275.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Probabilistic Latent Semantic Indexing", "authors": [ { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 1999, "venue": "Proc. of the 22nd International Conference on Research and Development in Information Retrieval (SIGIR '99)", "volume": "", "issue": "", "pages": "50--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hofmann, T. 1999. Probabilistic Latent Semantic Indexing. Proc. of the 22nd International Conference on Research and Development in Information Retrieval (SIGIR '99), 50-57.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unsupervised Learning by Probabilistic Latent Semantic Analysis. Machine Learning", "authors": [ { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2001, "venue": "", "volume": "42", "issue": "", "pages": "177--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hofmann, T. 2001. Unsupervised Learning by Probabilistic Latent Semantic Analysis. Ma- chine Learning, 42:177-196.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Existence and Application of Common Threshold of the Degree of Association", "authors": [ { "first": "K", "middle": [], "last": "Kojima", "suffix": "" } ], "year": 2004, "venue": "Proc. 
of the Forum on Information Technology (FIT2004) F-003", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kojima, K., et. al. 2004. Existence and Application of Common Threshold of the Degree of Association. Proc. of the Forum on Information Technology (FIT2004) F-003.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On the Effectiveness of the Skew Divergence for Statistical Language Analysis", "authors": [ { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2001, "venue": "Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, L. 2001. On the Effectiveness of the Skew Divergence for Statistical Language Analysis. Artificial Intelligence and Statistics 2001, 65-72.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Divergence measures based on the shannon entropy", "authors": [ { "first": "J", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1991, "venue": "IEEE Transactions on Information Theory", "volume": "37", "issue": "1", "pages": "140--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, J. 1991. Divergence measures based on the shannon entropy. IEEE Transactions on Information Theory, 37(1):140-151.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Probabilistic Representation of Meanings. IPSJ SIG-Notes Natural Language", "authors": [ { "first": "D", "middle": [], "last": "Mochihashi", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2002, "venue": "", "volume": "147", "issue": "", "pages": "77--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mochihashi, D., Matsumoto, Y. 2002. Probabilistic Representation of Meanings. IPSJ SIG- Notes Natural Language, 2002-NL-147:77-84.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The National Institute of Japanese Language", "authors": [], "year": 2004, "venue": "Bunruigoihyo. Dainippontosho", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "The National Institute of Japanese Language. 2004. Bunruigoihyo. Dainippontosho.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Part-of-Speech Tagging Guidelines for the Penn Treebank Project", "authors": [ { "first": "B", "middle": [], "last": "Santorini", "suffix": "" } ], "year": 1990, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Santorini, B. 1990. Part-of-Speech Tagging Guidelines for the Penn Treebank Project. ftp://ftp.cis.upenn.edu/pub/treebank/doc/tagguide.ps.gz", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Probabilistic Part-of-Speech Tagging Using Decision Trees", "authors": [ { "first": "H", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 1994, "venue": "Proc. of the First International Conference on New Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "44--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schmid, H. 1994. Probabilistic Part-of-Speech Tagging Using Decision Trees. Proc. 
of the First International Conference on New Methods in Natural Language Processing (NemLap- 94), 44-49.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Deterministic annealing EM algorithm", "authors": [ { "first": "N", "middle": [], "last": "Ueda", "suffix": "" }, { "first": "R", "middle": [], "last": "Nakano", "suffix": "" } ], "year": 1998, "venue": "Neural Networks", "volume": "11", "issue": "", "pages": "271--282", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ueda, N., Nakano, R. 1998. Deterministic annealing EM algorithm. Neural Networks, 11:271-282.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Outline of our approach", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Test-sets for discrimination rate calculation base word: computer rank synonym sim sim * rel.(p) p \u2022 sim *", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "Performances of distance/similarity measures Discrimination rate measured by varying threshold t f", "uris": null, "type_str": "figure" }, "FIGREF3": { "num": null, "text": "Performances of PLSI and conventional methods", "uris": null, "type_str": "figure" }, "FIGREF4": { "num": null, "text": "Integration results varying N", "uris": null, "type_str": "figure" }, "TABREF1": { "type_str": "table", "text": "John gave presents to his colleagues.", "html": null, "num": null, "content": "
(a) Original sentence: John gave presents to his colleagues.
(b) Parsing result (phrase structure tree)
(c) Dependency structure: [John] gave [presents] to [his colleagues]
(d) Co-occurrence extraction from dependencies: (\"give\", subj, \"John\"), (\"give\", obj, \"present\"), (\"give\", \"to\", \"colleague\")
" }, "TABREF2": { "type_str": "table", "text": "The numbers of pairs in the highly and unrelated test sets are 383 and 1,124, respectively.", "html": null, "num": null, "content": "
highly related | unrelated
(answer, reply) | (animal, coffee)
(phone, telephone) | (him, technology)
(sign, signal) | (track, vote)
(concern, worry) | (path, youth)
\u2026 | \u2026
" } } } }