{ "paper_id": "O12-1027", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:03:06.631518Z" }, "title": "Context-Aware In-Page Search", "authors": [ { "first": "Yu-Hao", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Yu-Lan", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Tzu-Xi", "middle": [], "last": "Yen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": {} }, "email": "jason.jschang@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we introduce a method for searching appropriate articles from knowledge bases (e.g. Wikipedia) for a given query and its context. In our approach, this problem is transformed into a multi-class classification of candidate articles. The method involves automatically augmenting smaller knowledge bases using larger ones and learning to choose adequate articles based on hyperlink similarity between article and context. At run-time, keyphrases in given context are extracted and the sense ambiguity of query term is resolved by computing similarity of keyphrases between context and candidate articles. Evaluation shows that the method significantly outperforms the strong baseline of assigning most frequent articles to the query terms. Our method effectively determines adequate articles for given query-context pairs, suggesting the possibility of using our methods in context-aware search engines.", "pdf_parse": { "paper_id": "O12-1027", "_pdf_hash": "", "abstract": [ { "text": "In this paper we introduce a method for searching appropriate articles from knowledge bases (e.g. Wikipedia) for a given query and its context. In our approach, this problem is transformed into a multi-class classification of candidate articles. The method involves automatically augmenting smaller knowledge bases using larger ones and learning to choose adequate articles based on hyperlink similarity between article and context. At run-time, keyphrases in given context are extracted and the sense ambiguity of query term is resolved by computing similarity of keyphrases between context and candidate articles. Evaluation shows that the method significantly outperforms the strong baseline of assigning most frequent articles to the query terms. Our method effectively determines adequate articles for given query-context pairs, suggesting the possibility of using our methods in context-aware search engines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Today we surf the Internet through search engines most of the time. With the explosive growth of web pages, the accuracy and relevancy of search results have become ever more important. Traditional search engines accept keywords, and return a page full of possible relevant results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Then users can click one of the results to visit the sites they are interested in. We call this type of search \"keyword-search\". 
Today, almost all search engines are keyword-based.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, various classes of results mixed in the search results. For example, when a user query the search engine with the keyword \"apple\", the search results comprise of two major class, \"Apple Inc.\", the computer company, and \"apple\", a kind of fruit. With only one keyword, even state-of-the-art keyword-based search engines could not distinguish between different search intents. Unlike keyword search, context-aware search assume each query is associated with a context. In this paper, we present a prototypical system, In-Page Search, that automatically extract context information and use them to disambiguate ambiguous queries. Users could select the terms they are interested in, and then with a click of the mouse, the In-Page Search system shows a pop-up window with the most relevant results for the given context.(See Figure 1.) In-Page Search is similar to the \"entity-linking problem\", which has long been an active research topic in IR and Database community. Entity-linking problem could be informally described as follows: given a knowledge base, in which every entry is an entity and its associated information. Given a mention and the context with the mention, determine the correct entity that the given mention really links to. For example, Figure 2 shows the mention \"John McCarthy\" and it's context, in a knowledge base, there are more than 10 entities which may be linked to \"John McCarthy\". The problem is determining the correct entity to link to.", "cite_spans": [ { "start": 831, "end": 841, "text": "Figure 1.)", "ref_id": null } ], "ref_spans": [ { "start": 1263, "end": 1271, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Intuitively, entity-linking could be considered a Named-Entity Disambiguation problem or more generally, a word sense disambiguation problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In our approach, we also exploit the cross-language features in multi-language knowledge bases. This method augments information in one language with other languages in the same knowledge base to cope with the data sparseness problem which may be a problem for a language with less data. We discuss this multi-language model and the definitions of various link-based similarity measures in Chapter 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "At run-time, In-Page Search starts with a query together with its context page submitted by the user. The system then extracts context terms and transforms them into machine-readable features. Finally, the system uses a SVM model (Chang and Lin, 2011) trained on a knowledge base to determine which entity in the knowledge base should be linked to the current query, and output a summarized abstract of this entity to the user. The results could be further augmented for other purposes. For example, for the input links to a geographic entity, we could show the location using a map application.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this thesis is organized as follows. We review the related work in the following chapter. Then we describe our preprocessing and runtime algorithm in Chapter 3. 
We then report on the experimental setup and compare our results to various baselines in Chapter 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Conclusions are provided in Chapter 5 along with the directions of future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Search engines and related technology has long been an active research topic in information retrieval and natural language processing. Most modern search engines (e.g. Google, Bing, and Yahoo!) accept keyword or keyphrase as input. Today keyword search engines have excellent performance in terms of both results relevancy and response time. However, keyword search engines do not consider a query may come with a context, so they could not distinguish between different search intents. With the rise of the mobile web, some search engines have evolved to provide better user experience. One reprehensive example is the Google Now feature of mobile edition of Google. While accepting user's voice input, it extracts user's context information such as GPS location, user's schedule recorded on calendar application, and the contact information on user's cell phone. Thus, Google Now can analyze user's search intent and provide the most relevant information using these contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Previously, much effort has been made in research on word sense disambiguation based on machine learning (Black, 1988; Hearst, 1991; Leacock, Towell, and Voorhees, 1993; Bruce and Wiebe, 1994) . Yarowsky (Yarowsky, 1992) uses a Na\u00efve Bayesian classifier trained on Roget's thesaurus to classify words with given context into its sense category. They use class-based salient words list provided by Roget's thesaurus as features and tuning weight by counting the frequencies of surrounding salient words in context. While achieving high accuracy, this research can be viewed as prototypical framework of most machine learning WSD systems. These approaches often rely on sense-labeled corpus. Although supervised machine learning WSD algorithms frequently gives high performance, however, sense-labeled corpus is not always available. Compared to our approach, we use Wikipedia as our corpus, its cross-lingual nature enables us to augment smaller knowledge base with other languages.", "cite_spans": [ { "start": 105, "end": 118, "text": "(Black, 1988;", "ref_id": "BIBREF2" }, { "start": 119, "end": 132, "text": "Hearst, 1991;", "ref_id": "BIBREF13" }, { "start": 133, "end": 169, "text": "Leacock, Towell, and Voorhees, 1993;", "ref_id": "BIBREF17" }, { "start": 170, "end": 192, "text": "Bruce and Wiebe, 1994)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "An important branch of WSD is entity-linking. While WSD focuses on linking word to its correct sense given context, entity-linking systems focus on linking mentions of entities (often named-entities) to its correct entry in a given knowledge base. \"Wikify\" ( Compared to our system, most entity-linking system developed their method on English, so they could not directly apply to languages that need segmentation pre-processing. To apply our method to CJK languages, we use a scheme similar in (Milne and Witten, 2008) to transform context page into vector of context entities. 
In addition, we extend traditional link-based measure to a cross-lingual augmented knowledge base. To the best of our knowledge, such technique hasn't been shown in previous systems.", "cite_spans": [ { "start": 257, "end": 258, "text": "(", "ref_id": null }, { "start": 495, "end": 519, "text": "(Milne and Witten, 2008)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Understanding a user's search intent basing solely on query term (e.g., ) is a challenging task. Short query terms typically have more than one sense which leading to multiple entities in the knowledge base that could be linked to. To assign adequate entity for a given query, a promising method is to compute the similarity between a query's context and candidate entities' description, and returning the most similar entity (e.g. for ) in the context of a computer-related Chinese article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": "3" }, { "text": "We focus on the essential step of determining user's search intent: choosing the appropriate Proceedings of the Twenty-Fourth Conference on Computational Linguistics and Speech Processing (ROCLING 2012) entity in the knowledge base for the given query. Once the entity has determined, the system returns information of this entity in various ways (e.g. text description, image, audio, video). In a Wikipedia-like knowledge base, we treat each document as an entity, its description page as the context, and hyperlinks in this page as query terms. With the hyperlinked nature of such a knowledge base, we train a classifier which estimate the similarity of link structure between each query term's context, and determine whether a query term and an entity (i.e. the article titles) should be linked together. Thus, the problem of context-aware search is transformed to an entity-linking problem. We now formally state the problem we are addressing by first giving a definition of Wikipedia-like knowledge base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3.1" }, { "text": "A Wikipedia-like knowledge base is a collection of documents, each document should describe an unique concept with hyperlinks, inter-wiki links and disambiguation pages which list possible sense of an ambiguous term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3.1" }, { "text": "We are given a set of Wikipedia-like knowledge bases KB={ kb 1 ,\u2026, kb n | n \u2265 1 } (e.g., {Chinese Wikipedia, English Wikipedia}), a query term q, a context document c of q, and a knowledge base kb j KB, where q should be searched. Our goal is to assign an adequate document e i , where e i kb j = {e 1 ,\u2026, e j } and e 1 ,\u2026, e j are candidate senses. For this, we compute the link structure similarity between each document pair (c, e), where e is in kb j , and then train a classifier to determine which (c, e) pair should be linked together.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement:", "sec_num": null }, { "text": "We attempt to resolve the sense ambiguity of a given query term by learning link structure characteristics from a collection of pairs in a Wikipedia-like knowledge base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning to Link with Wikipedia-like Databases", "sec_num": "3.2" }, { "text": "Our learning process is shown in Figure 3 . 
", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 41, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Learning to Link with Wikipedia-like Databases", "sec_num": "3.2" }, { "text": "In the first stage of the learning process (Step (1) in Figure 3 ), we generate candidate pairs from KB. Once the candidate pairs have been computed and stored, the In-Page Search system could use them to efficiently retrieve possible entities of a given query, instead of comparing every e in KB. For example, given the query \" \", we retrieve { <\" \", \" \">, <\" \", \" \">, <\" \", \" ( )\">, <\" \",\"", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 64, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Generate Candidate Term-Entity Pairs From Knowledge Base", "sec_num": "3.2.1" }, { "text": "\">}, and then, these four entities will be disambiguated. We compute these pairs from KB using a hyperlink's anchor text and its destination entity. The rationale behind computing pairs using anchor texts is that anchor texts reflect how people mentioning In", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generate Candidate Term-Entity Pairs From Knowledge Base", "sec_num": "3.2.1" }, { "text": "Step 1 for each ei in docs (4) links = GetLinks(ei) (5) title = GetTitle(ei) (6) if ei is Disambiguation Page (7) for each target in links (8) list += (9) else (10) for each in links (11) list += (12) hist=Histogram(list) (13) return hist title, link target> to a temp list. Otherwise we add to the temp list (Steps (6)~(11)). Finally, we compute the histogram of the temp list, where every entry is a pair and its frequency (Steps (12) ). An example of results is shown in Table 2 . ", "cite_spans": [ { "start": 27, "end": 30, "text": "(4)", "ref_id": "BIBREF3" }, { "start": 52, "end": 55, "text": "(5)", "ref_id": "BIBREF4" }, { "start": 77, "end": 80, "text": "(6)", "ref_id": "BIBREF5" }, { "start": 110, "end": 113, "text": "(7)", "ref_id": "BIBREF6" }, { "start": 139, "end": 142, "text": "(8)", "ref_id": "BIBREF7" }, { "start": 175, "end": 179, "text": "(10)", "ref_id": "BIBREF9" }, { "start": 214, "end": 218, "text": "(11)", "ref_id": "BIBREF10" }, { "start": 243, "end": 247, "text": "(12)", "ref_id": "BIBREF11" }, { "start": 269, "end": 273, "text": "(13)", "ref_id": "BIBREF12" }, { "start": 521, "end": 525, "text": "(12)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 563, "end": 570, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Generate Candidate Term-Entity Pairs From Knowledge Base", "sec_num": "3.2.1" }, { "text": "In the second stage of the learning algorithm (Step (2) in Figure 3 ), we augment each In a Wikipedia-like knowledge base, each article can be viewed as a concept (i.e. 
entity).", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 67, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Augmenting Knowledge Base using Inter-Wiki Links", "sec_num": "3.2.2" }, { "text": "From hyperlinks in documents, we could build a directed graph of the entire knowledge base, in which nodes denote articles, the edge indicate an article mentions another via hyperlinks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Augmenting Knowledge Base using Inter-Wiki Links", "sec_num": "3.2.2" }, { "text": "Thus, out-going edges of a node point to other articles mentioned in the article represented by the node, while in-coming edges of a node indicate other articles mentioning the node. We call these two edges out-links and in-links respectively (See Figure 6. ). inter-link points to its corresponding entity in kb e . If the result is negative, we leave the current article unchanged without augmentation. In Step (5), we identify the corresponding article in kb e by looking at the target, e en of inter-wiki link of e cn . Then, we retrieve all out-links and in-links of e en and carry out the CombineLinks procedure with both kinds of links (Step (6), (7),", "cite_spans": [], "ref_spans": [ { "start": 248, "end": 257, "text": "Figure 6.", "ref_id": null } ], "eq_spans": [], "section": "Augmenting Knowledge Base using Inter-Wiki Links", "sec_num": "3.2.2" }, { "text": "). In the CombineLinks procedure, we iterate through all links in link en , and then determine if the link (i.e. lk en ) has an inter-link (Step (10)). If such an inter-link exists, we \"translate\" the link by replacing lk en with lk cn , a hyperlink point to destination of the inter-link and has anchor text of destination title. Finally we add the translated link to the original set of link (i.e. link cn ), and store them in database. Note that the link en is also stored in kb c (Step (14) ). We do that to support cross-lingual entity-linking. Once the augmentation has been done, each article in kb c has two link sets from each knowledge base. For articles with inter-links, the performance of entity-linking could be improved from the augmentation algorithm.", "cite_spans": [ { "start": 490, "end": 494, "text": "(14)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Augmenting Knowledge Base using Inter-Wiki Links", "sec_num": "3.2.2" }, { "text": "In the third and final stage of the learning process, we train a Link Similarity Model based on the link graph of Wikipedia-like knowledge base articles. To determine which entity to be linked given query term q, we compute link graph similarity between context c of q and candidate entities' articles, and transform them to feature vectors to train a binary SVM classifier. In the rest of this section, we first explain the Link Similarity Model, which is used to estimate the similarity between two entities, and show how we incorporate the Link Similarity Model with SVM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Binary SVM Model", "sec_num": "3.2.3" }, { "text": "Consider link graphs in Figure 6 . 
We compute similarity between two link graphs which procedure AugmentKB(kb c , kb e ) (1) docs = GetDocuments(kb c ) (2) for each e cn in docs (3) = (4) if InterlinkOf(e cn ) exists: (5) e en =GetDocument(kb e ,InterlinkOf(e cn )) (6) = 7CombineLinks(olinks cn ,olinks en ) (8) CombineLinks(ilinks cn ,ilinks en )", "cite_spans": [ { "start": 152, "end": 155, "text": "(2)", "ref_id": "BIBREF1" }, { "start": 178, "end": 181, "text": "(3)", "ref_id": "BIBREF2" }, { "start": 244, "end": 247, "text": "(4)", "ref_id": "BIBREF3" }, { "start": 278, "end": 281, "text": "(5)", "ref_id": "BIBREF4" }, { "start": 326, "end": 329, "text": "(6)", "ref_id": "BIBREF5" }, { "start": 429, "end": 432, "text": "(8)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 24, "end": 32, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Training the Binary SVM Model", "sec_num": "3.2.3" }, { "text": "procedure CombineLinks(link cn ,link en ) (9) for each lk en in link en : (10) if InterlinkOf(lk en ) exists: (11) lk cn =translate(lk en ,InterlinkOf(lk en )) (12) link cn +=lk cn (13) link en -=lk en (14) AddToKB()", "cite_spans": [ { "start": 74, "end": 78, "text": "(10)", "ref_id": "BIBREF9" }, { "start": 110, "end": 114, "text": "(11)", "ref_id": "BIBREF10" }, { "start": 160, "end": 164, "text": "(12)", "ref_id": "BIBREF11" }, { "start": 181, "end": 185, "text": "(13)", "ref_id": "BIBREF12" }, { "start": 202, "end": 206, "text": "(14)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Training the Binary SVM Model", "sec_num": "3.2.3" }, { "text": "has vertices v a , v b as central node respectively using following equations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Binary SVM Model", "sec_num": "3.2.3" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Binary SVM Model", "sec_num": "3.2.3" }, { "text": "In Eq. (1) E a , E b denote the edges of v a , v b respectively. The interpretation of Eq. 1is that we compute the number of edges in common with both vertices respectively, and normalize it using edges of smaller graph constructed from v a and v b . In order to make range of Eq. (1) lies in [0, 1], we choose to normalize by smaller graph. Thus, bigger value means bigger similarity between two vertices.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Binary SVM Model", "sec_num": "3.2.3" }, { "text": "Given training data, we use Eq. (1) to compute features from training data and use them to train a binary SVM classifier. The procedure is shown in Figure 8 . In the computation of link similarity, notice that since the knowledge base has been augmented in 3.2.2, each articles has two link sets. We utilize a set of constant coefficient <\u03b1 1 , \u03b1 2 , \u03b1 3 > to interpolate between similarity computed from , where link cn0 is the unaugmented link set of kb cn . 
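To make this concrete, the following is a minimal Python sketch (our own illustration, not the paper's code) of the Eq. (1) similarity between two articles' link sets and of the alpha-interpolation over augmented and unaugmented link sets; the set names and the example weights are assumptions, since the exact interpolated terms are not spelled out here.

def link_similarity(edges_a, edges_b):
    # Eq. (1): edges shared by the two vertices, normalized by the smaller
    # edge set so that the value lies in [0, 1].
    if not edges_a or not edges_b:
        return 0.0
    return len(edges_a & edges_b) / min(len(edges_a), len(edges_b))

def interpolated_similarity(ctx_links, links_cn0, links_cn, links_en,
                            alphas=(0.4, 0.3, 0.3)):
    # Interpolation with constant coefficients <a1, a2, a3>; which link sets
    # are combined, and with what weights, is assumed here for illustration.
    a1, a2, a3 = alphas
    return (a1 * link_similarity(ctx_links, links_cn0)
            + a2 * link_similarity(ctx_links, links_cn)
            + a3 * link_similarity(ctx_links, links_en))
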
Finally we examine whether the target of term's hyperlink equals entity, if the result is positive, we add the current feature vector to the input of SVM with positive example, otherwise with negative example (Steps (6)~(9)).", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 156, "text": "Figure 8", "ref_id": "FIGREF8" } ], "eq_spans": [], "section": "Training the Binary SVM Model", "sec_num": "3.2.3" }, { "text": "procedure GenerateSVMInput(kb) (1) = RandomTermArticles(kb) (2) for each in 3candidates = GetTermEntity(term) (4) for each in candidates (5) = extractFeatures(article, entity) (6) if entity==TargetOf(term) 7AddToOutput(<1, lp, olinkSim, ilink>) (8) else (9) AddToOutput(<0, lp, olinkSim, ilink>) 10", "cite_spans": [ { "start": 31, "end": 34, "text": "(1)", "ref_id": "BIBREF0" }, { "start": 77, "end": 80, "text": "(2)", "ref_id": "BIBREF1" }, { "start": 161, "end": 164, "text": "(4)", "ref_id": "BIBREF3" }, { "start": 203, "end": 206, "text": "(5)", "ref_id": "BIBREF4" }, { "start": 267, "end": 270, "text": "(6)", "ref_id": "BIBREF5" }, { "start": 345, "end": 348, "text": "(9)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Training the Binary SVM Model", "sec_num": "3.2.3" }, { "text": "Once the SVM model is constructed, we are ready to classify or disambiguate query terms to corresponding entities in KB. We associate adequate entities with given query terms and context using the procedures in Figure 9 . Figure 9 . Classification algorithm at run-time.", "cite_spans": [], "ref_spans": [ { "start": 211, "end": 219, "text": "Figure 9", "ref_id": null }, { "start": 222, "end": 230, "text": "Figure 9", "ref_id": null } ], "eq_spans": [], "section": "Run-Time Entity Linking", "sec_num": "3.3" }, { "text": "In", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Run-Time Entity Linking", "sec_num": "3.3" }, { "text": "Step (1) of ClassifyTerm procedure, we transform given context into an entity containing out-link set and in-link set, thus the link similarity measure could be applied. In", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Run-Time Entity Linking", "sec_num": "3.3" }, { "text": "TransformContext procedure, we first split the context into N-grams, and then do a longest possible match with the pairs of kb computed in 3.2.1. For every N-gram there may be more than one matching pairs, we choose the one with highest frequency. Then we iterate through the matched terms (Step (2)), and then retrieve the corresponding entity (Step(3)), finally in Step (4) we make a union on the entity's link sets with the output, ctxEntity's link set, which is initialize as empty set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Run-Time Entity Linking", "sec_num": "3.3" }, { "text": "We now return to the ClassifyTerm procedure. Once we get the transformed context entity, in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Run-Time Entity Linking", "sec_num": "3.3" }, { "text": "Step (2) we retrieve the candidates pairs where \"Term\" equals the query term q. For each entities in the candidate list, we compute feature vectors, where the first element is the link probability of current entity, the second and third elements are computed using eq. (1) with entity and context entity as input (Step (4)). After that we run the SVM model trained in 3.2.3 to predict the results, if it is positive, we add this entity to the result candidates list, otherwise we continue the iteration. 
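A minimal sketch of this candidate-filtering loop, assuming a scikit-learn-style binary SVM and hypothetical helpers (transform_context, get_candidate_entities, link_probability) that wrap the structures built in Sections 3.2.1 and 3.2.2:

def classify_term(query, context, kb, svm_model):
    # Turn the context page into a pseudo-entity holding out-link and in-link sets.
    ctx = transform_context(context, kb)                 # hypothetical helper
    result_candidates = []
    for entity in get_candidate_entities(query, kb):     # candidates from the pairs of 3.2.1
        features = [link_probability(query, entity, kb),               # lp
                    link_similarity(ctx.out_links, entity.out_links),  # out-link similarity, Eq. (1)
                    link_similarity(ctx.in_links, entity.in_links)]    # in-link similarity, Eq. (1)
        if svm_model.predict([features])[0] == 1:        # keep only entities classified as linkable
            result_candidates.append(entity)
    return result_candidates
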
After the end of the iteration, we select the one with highest link probability as the result entity to be linked (Steps (9)~(12)).", "cite_spans": [ { "start": 284, "end": 287, "text": "(1)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Run-Time Entity Linking", "sec_num": "3.3" }, { "text": "procedure ClassifyTerm(q, context, kb) (1) ctxEntity = transformContext(context, kb) (2) candidates = GetCandidateEntities(q) (3) for each entity in candidates (4) feature = (5) if SVMPredict(feature) is positive (6) AddToResultCandidate(entity) 7else (8) continue (9) if ResultCandidate is empty (10) return \"No entity could be linked\" (11) else (12) return MaxLinkProb(ResultCandidate)", "cite_spans": [ { "start": 85, "end": 88, "text": "(2)", "ref_id": "BIBREF1" }, { "start": 126, "end": 129, "text": "(3)", "ref_id": "BIBREF2" }, { "start": 160, "end": 163, "text": "(4)", "ref_id": "BIBREF3" }, { "start": 251, "end": 254, "text": "(5)", "ref_id": "BIBREF4" }, { "start": 290, "end": 293, "text": "(6)", "ref_id": "BIBREF5" }, { "start": 329, "end": 332, "text": "(8)", "ref_id": "BIBREF7" }, { "start": 342, "end": 345, "text": "(9)", "ref_id": "BIBREF8" }, { "start": 374, "end": 378, "text": "(10)", "ref_id": "BIBREF9" }, { "start": 414, "end": 418, "text": "(11)", "ref_id": "BIBREF10" }, { "start": 424, "end": 428, "text": "(12)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Run-Time Entity Linking", "sec_num": "3.3" }, { "text": "Procedure TransformContext(context, kb) (1) terms = LongestPossibleMostFrequentMatch(context, kb) (2) for each terms in terms: (3) entity = GetEntity(term) (4) CombineToCtxEntity() (5) return ctxEntity", "cite_spans": [ { "start": 98, "end": 101, "text": "(2)", "ref_id": "BIBREF1" }, { "start": 156, "end": 159, "text": "(4)", "ref_id": "BIBREF3" }, { "start": 213, "end": 216, "text": "(5)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Run-Time Entity Linking", "sec_num": "3.3" }, { "text": "The proposed Link Similarity Model and knowledge base augmentation method was designed to resolve the sense ambiguity of given query terms and to leverage broader information from larger knowledge base. As such, our models will be trained on query terms and their target entities. In this thesis we treat hyperlinks and their destination in Wikipedia as query terms and target entities. Using such data, we compiled datasets from Chinese Wikipedia for training and evaluation. In this chapter, we first present the training and test data for the evaluation (Section ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "4" }, { "text": "In this thesis we focus on linking Chinese query terms to articles in Chinese Wikipedia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Set", "sec_num": "4.1" }, { "text": "We used the Chinese Wikipedia XML file dumped at 20120503 as our main knowledge base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Set", "sec_num": "4.1" }, { "text": "For the augmentation algorithm, we used 20120502 version of English Wikipedia to augment Chinese Wikipedia. Some statistics are shown in Table 3 . Currently English Wikipedia is far more larger than Chinese Wikipedia, no matter in numbers of articles, numbers of language-links or average sense ambiguity. Notice that the sense ambiguity is lower in Chinese. 
To better investigate our algorithms, we compiled a collection of pairs from Chinese Wikipedia with two criteria:", "cite_spans": [], "ref_spans": [ { "start": 137, "end": 144, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Data Set", "sec_num": "4.1" }, { "text": "1. The sense ambiguity of hyperlink's anchor text (i.e. query terms) should not be too low or high. Lower ambiguity leads to easier datasets for our classifier, while extremely high value makes running time exponential longer, which is unacceptable for a real-time system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Set", "sec_num": "4.1" }, { "text": "We set this value to lie in [2, 7] in our experiment.", "cite_spans": [ { "start": 28, "end": 31, "text": "[2,", "ref_id": "BIBREF1" }, { "start": 32, "end": 34, "text": "7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Data Set", "sec_num": "4.1" }, { "text": "The contexts (i.e. articles) where each hyperlink appeared should not be too lengthy. Our", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Link Similarity Model uses hyperlinks information in context. In Wikipedia some special pages such as Lists pages, which lists instances of entities, contain extremely many hyperlinks that introduce too much noise to our model. In our implementation we make a threshold on number of hyperlinks per article to lower than 50. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "The proposed method starts with a query term and its textual context, and determines a suitable entity (i.e. article) for the query term in Chinese Wikipedia. The output of our system is the linked article from Chinese Wikipedia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods Compared", "sec_num": "4.2" }, { "text": "In this thesis, we proposed a method for augmenting the smaller Wikipedia-like knowledge base (CN) using larger knowledge base (EN). In addition, we propose a model for computing link structure similarity between two hyperlinked articles, and then use it to train a SVM classifier, in which we use out-links (OL) and in-links (IL) as features. Further, the link probability (LP) is used as a feature to balance the system performance between rare and common entities. To inspect the effectiveness of the augmentation method and these modules in more detail, the baseline and the combinations of the three main modules, OL, IL, and LP, evaluated in our experiments are described as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods Compared", "sec_num": "4.2" }, { "text": "LP:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods Compared", "sec_num": "4.2" }, { "text": "We train the SVM model using only link probability, and we use this model as ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods Compared", "sec_num": "4.2" }, { "text": "In this section, we report the evaluation results of the experiments on the methodology described in the previous chapter. Table 4 . shows the results evaluated on the testing data consist of 2965 . 
classification strategy can effectively return the most compatible entity to a given query term.", "cite_spans": [], "ref_spans": [ { "start": 123, "end": 130, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "4.3" }, { "text": "As identified in previous related research (Milhacea et al., 2007; Milne et al., 2008) , the baseline LP is extremely effective for determining suitable English Wikipedia articles for ambiguous query terms, in our experiment performed using Chinese Wikipedia, this is also the case.", "cite_spans": [ { "start": 43, "end": 66, "text": "(Milhacea et al., 2007;", "ref_id": null }, { "start": 67, "end": 86, "text": "Milne et al., 2008)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "4.3" }, { "text": "Comparing the two full models (i.e. OL+IL+LP), the results on CN and CN+EN indicate that our augmentation process provides a small performance improvement. Although the augmentation process does not greatly improve the performance, we perform 10-fold cross validation on another test set consisting of 3001 pairs and found that the performance gain is statistically significant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "4.3" }, { "text": "In general, there is no significant difference between average number of in-links and out-links, so the number of links does not explain this phenomena. We suggest that in Wikipedia, in-links reflect topics that mention an entity, while out-links reflect context terms of a certain entity. Since topics are more stable than context term, the performance influenced by in-links are stronger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "4.3" }, { "text": "In sum, our model achieved impressive performance for linking query terms to articles in Chinese Wikipedia. The augmentation process further significantly improve performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "4.3" }, { "text": "Many avenues exist for future research and improvement of our system. For example, more features used in training the classification models could be added to boost system performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Works", "sec_num": "5" }, { "text": "To improve our system, language features such as collocations, N-gram counts, or part-of-speech could be added. Additionally, an interesting direction to explore is to apply our model to cross-language entity-linking. To support cross-language entity-linking, we could also augment the pairs described in 3.2.1 using similar augmentation process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Works", "sec_num": "5" }, { "text": "Once the augmentation has been done, we could cross-link a term to other knowledge base. For example, \" \" in Chinese Wikipedia may be linked to \"Big Apple\", the nickname of New York city, in English Wikipedia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Works", "sec_num": "5" }, { "text": "In summary, we have introduced a method for linking a pair to an appropriate article in Chinese Wikipedia. Our goal is to improve user experience so that the underlying search system could distinguish between different search intents based on the context. 
The method involves possible candidates construction, knowledge base augmentation via inter-links, computation of various link similarity measures, and multi-class classification using binary SVM classifier. We have implemented and thoroughly evaluated the method as applied to linking query terms to Chinese Wikipedia articles. In our evaluation, we have shown Proceedings of the Twenty-Fourth Conference on Computational Linguistics and Speech Processing (ROCLING 2012) that the augmentation process slightly improved system performance. In addition, our full model significantly outperforms the strong baseline in terms of entity accuracy.", "cite_spans": [ { "start": 735, "end": 749, "text": "(ROCLING 2012)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Works", "sec_num": "5" }, { "text": "Proceedings of the Twenty-Fourth Conference on Computational Linguistics and Speech Processing(ROCLING 2012)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word Sense Disambiguation using Conceptual Density. 16th Conference on Computational Linguistics", "authors": [ { "first": "E", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "G", "middle": [], "last": "Rigau", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "16--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agirre, E., and Rigau, G. (1996). Word Sense Disambiguation using Conceptual Density. 16th Conference on Computational Linguistics, (pp. 16-22). Copenhagen.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An Adapted Lesk Algorithm for Word Sense Disambiguation Using WordNet. the Third International Conference on Intelligent Text Processing and Computational Linguistics", "authors": [ { "first": "S", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "T", "middle": [], "last": "Pedersen", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Banerjee, S., and Pedersen, T. (2002). An Adapted Lesk Algorithm for Word Sense Disambiguation Using WordNet. the Third International Conference on Intelligent Text Processing and Computational Linguistics. Mexico City.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An Experiment in Computational Discrimination of English Word Senses", "authors": [ { "first": "E", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 1988, "venue": "IBM Journal of Research and Development", "volume": "", "issue": "", "pages": "185--194", "other_ids": {}, "num": null, "urls": [], "raw_text": "Black, E. W. (1988). An Experiment in Computational Discrimination of English Word Senses. IBM Journal of Research and Development , 185-194.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Word-Sense Disambiguation Using Decomposable Models. 32nd Annual Meeting of the Association for Computational Linguistics", "authors": [ { "first": "R", "middle": [], "last": "Bruce", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "139--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bruce, R., and Wiebe, J. (1994). Word-Sense Disambiguation Using Decomposable Models. 32nd Annual Meeting of the Association for Computational Linguistics (pp. 139-146). 
Las Cruces: Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Improving Statistical Machine Translation using Word Sense Disambiguation", "authors": [ { "first": "M", "middle": [], "last": "Carpaut", "suffix": "" }, { "first": "D", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2007, "venue": "Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "61--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carpaut, M., and Wu, D. (2007). Improving Statistical Machine Translation using Word Sense Disambiguation. 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (pp. 61-72). Prague: Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Word Sense Disambiguation Improves Statistical Machine Translation. the Association for Computational Linguistics (ACL)", "authors": [ { "first": "Y", "middle": [ "S" ], "last": "Chan", "suffix": "" }, { "first": "H", "middle": [ "T" ], "last": "Ng", "suffix": "" }, { "first": "D", "middle": [], "last": "Chiang", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "33--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chan, Y. S., Ng, H. T., and Chiang, D. (2007). Word Sense Disambiguation Improves Statistical Machine Translation. the Association for Computational Linguistics (ACL), (pp. 33-40).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Building a Chinese WordNet via Class-based Translation Model", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" }, { "first": "T", "middle": [], "last": "Lin", "suffix": "" }, { "first": "G.-N", "middle": [], "last": "You", "suffix": "" }, { "first": "T", "middle": [ "C" ], "last": "Chuang", "suffix": "" }, { "first": "C.-T", "middle": [], "last": "Hsieh", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics and Chinese Language Processing", "volume": "", "issue": "", "pages": "61--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang, J. S., Lin, T., You, G.-N., Chuang, T. C., and Hsieh, C.-T. (2003). Building a Chinese WordNet via Class-based Translation Model. Computational Linguistics and Chinese Language Processing , 61-76.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "LIBSVM: A library for support vector machines", "authors": [ { "first": "Chang", "middle": [], "last": "Cc", "suffix": "" }, { "first": "C", "middle": [ "J" ], "last": "Lin", "suffix": "" } ], "year": 2011, "venue": "ACM Transactions on Intelligent Systems and Technology (TIST)", "volume": "2", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chang CC and Lin CJ. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST) 2(3):27.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The google similarity distance. Knowledge and Data Engineering", "authors": [ { "first": "Cilibrasi", "middle": [], "last": "Rl", "suffix": "" }, { "first": "Pmb", "middle": [], "last": "Vitanyi", "suffix": "" } ], "year": 2007, "venue": "IEEE Transactions on", "volume": "19", "issue": "3", "pages": "370--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cilibrasi RL and Vitanyi PMB. 2007. The google similarity distance. 
Knowledge and Data Engineering, IEEE Transactions on 19(3):370-83.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An Unsupervised Method for Word Sense Tagging using Parallel Corpora", "authors": [ { "first": "M", "middle": [], "last": "Diab", "suffix": "" }, { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 2002, "venue": "the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "255--262", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diab, M., and Resnik, P. (2002). An Unsupervised Method for Word Sense Tagging using Parallel Corpora. the 40th Annual Meeting of the Association for Computational Linguistics (ACL), (pp. 255-262). Philadelphia.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Using Bilingual Materials to Develop Word Sense Disambiguation Methods. the International Conference on Theoretical and Methodological Issues in Machine Translation", "authors": [ { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" }, { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": "Yarowsky", "middle": [], "last": "", "suffix": "" }, { "first": "D", "middle": [], "last": "", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "101--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gale, W. A., Church, K. W., and Yarowsky, D. (1992). Using Bilingual Materials to Develop Word Sense Disambiguation Methods. the International Conference on Theoretical and Methodological Issues in Machine Translation, (pp. 101-112).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "ImprovingWord Sense Disambiguation in Lexical Chaining", "authors": [ { "first": "M", "middle": [], "last": "Galley", "suffix": "" }, { "first": "K", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2003, "venue": "18th International Joint Conference on Artificial Intelligence (IJCAI 2003)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Galley, M., and McKeown, K. (2003). ImprovingWord Sense Disambiguation in Lexical Chaining. 18th International Joint Conference on Artificial Intelligence (IJCAI 2003). Acapulco.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "GermaNet -a Lexical-Semantic Net for German. ACL workshop Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications", "authors": [ { "first": "B", "middle": [], "last": "Hamp", "suffix": "" }, { "first": "H", "middle": [], "last": "Feldweg", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "9--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hamp, B., and Feldweg, H. (1997). GermaNet -a Lexical-Semantic Net for German. ACL workshop Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, (pp. 9-15). Madrid.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Noun Homograph Disambiguation using Local Context in Large Corpora", "authors": [ { "first": "M", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 1991, "venue": "7th Annual Conference of the University of Waterloo Centre for the New OED and Text Research", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hearst, M. A. (1991). Noun Homograph Disambiguation using Local Context in Large Corpora. 
7th Annual Conference of the University of Waterloo Centre for the New OED and Text Research, (pp. 1-15).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Semi-Automatic Construction of Chinese WordNet -Using Class-based Translation Model", "authors": [ { "first": "C.-T", "middle": [], "last": "Hsieh", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsieh, C.-T. (2000). Semi-Automatic Construction of Chinese WordNet -Using Class-based Translation Model.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A Thesaurus-based Semantic Classification of English Collocations", "authors": [ { "first": "C.-C", "middle": [], "last": "Huang", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "K", "middle": [ "H" ], "last": "Kao", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "38--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C.-C., Tseng, C.-H., Kao, K. H., and Chang, J. S. (2008). A Thesaurus-based Semantic Classification of English Collocations. ROCLING 2008, (pp. 38-52). Taipei.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Sinica BOW (Bilingual Ontological Wordnet): Integration of Bilingual WordNet and SUMO. 4th International Conference on Language Resources and Evaluation (LREC2004)", "authors": [ { "first": "C.-R", "middle": [], "last": "Huang", "suffix": "" }, { "first": "R.-Y", "middle": [], "last": "Chang", "suffix": "" }, { "first": "H.-P", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "1553--1556", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huang, C.-R., Chang, R.-Y., and Lee, H.-P. (2004). Sinica BOW (Bilingual Ontological Wordnet): Integration of Bilingual WordNet and SUMO. 4th International Conference on Language Resources and Evaluation (LREC2004), (pp. 1553-1556). Lisbon.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Corpus-based Statistical Sense Resolution", "authors": [ { "first": "C", "middle": [], "last": "Leacock", "suffix": "" }, { "first": "G", "middle": [], "last": "Towell", "suffix": "" }, { "first": "E", "middle": [], "last": "Voorhees", "suffix": "" } ], "year": 1993, "venue": "ARPA Human Language Technology Workshop", "volume": "", "issue": "", "pages": "260--265", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leacock, C., Towell, G., and Voorhees, E. (1993). Corpus-based Statistical Sense Resolution. ARPA Human Language Technology Workshop, (pp. 260-265).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Automatic Sense Disambiguation using Machine Readable Dictionaries: How to Tell a Pine Cone from an Ice Cream Cone", "authors": [ { "first": "M", "middle": [], "last": "Lesk", "suffix": "" } ], "year": 1986, "venue": "5th Annual International Conference on Systems Documentation", "volume": "", "issue": "", "pages": "24--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lesk, M. (1986). Automatic Sense Disambiguation using Machine Readable Dictionaries: How to Tell a Pine Cone from an Ice Cream Cone. 5th Annual International Conference on Systems Documentation (pp. 24-26). 
Toronto: Association for Computing Machinery.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A Method for Word Sense Disambiguation of Unrestricted Text", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "D", "middle": [ "I" ], "last": "Moldovan", "suffix": "" } ], "year": 1999, "venue": "the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics", "volume": "", "issue": "", "pages": "152--158", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihalcea, R., and Moldovan, D. I. (1999). A Method for Word Sense Disambiguation of Unrestricted Text. the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics (pp. 152-158). College Park: Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Introduction to WordNet: An On-line Lexical Database", "authors": [ { "first": "G", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "R", "middle": [], "last": "Beckwith", "suffix": "" }, { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" }, { "first": "D", "middle": [], "last": "Gross", "suffix": "" }, { "first": "K", "middle": [ "J" ], "last": "Miller", "suffix": "" } ], "year": 1990, "venue": "International Journal of Lexicography", "volume": "", "issue": "", "pages": "235--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miller, G. A., Beckwith, R., Fellbaum, C., Gross, D., and Miller, K. J. (1990). Introduction to WordNet: An On-line Lexical Database. International Journal of Lexicography , pp. 235-244.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Topic indexing with wikipedia", "authors": [ { "first": "O", "middle": [], "last": "Medelyan", "suffix": "" }, { "first": "I", "middle": [ "H" ], "last": "Witten", "suffix": "" }, { "first": "D", "middle": [], "last": "Milne", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the AAAI WikiAI workshop", "volume": "19", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Medelyan O., Witten I. H. and Milne D. 2008. Topic indexing with wikipedia. Proceedings of the AAAI WikiAI workshop, AAAI Press. 19 p.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Wikify!: Linking documents to encyclopedic knowledge", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "A", "middle": [], "last": "Csomai", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the sixteenth ACM conference on conference on information and knowledge management", "volume": "233", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihalcea R. and Csomai A. 2007. Wikify!: Linking documents to encyclopedic knowledge. Proceedings of the sixteenth ACM conference on conference on information and knowledge management. 233 p.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Computing semantic relatedness using wikipedia link structure", "authors": [ { "first": "D", "middle": [], "last": "Milne", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the new zealand computer science research student conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Milne D. 2007. Computing semantic relatedness using wikipedia link structure. 
Proceedings of the new zealand computer science research student conference.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Learning to link with wikipedia", "authors": [ { "first": "D", "middle": [], "last": "Milne", "suffix": "" }, { "first": "I", "middle": [ "H" ], "last": "Witten", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 17th ACM conference on information and knowledge management", "volume": "509", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Milne D. and Witten I. H. 2008. Learning to link with wikipedia. Proceedings of the 17th ACM conference on information and knowledge management, ACM. 509 p.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The Informative Role of WordNet in Open-Domain Question Answering", "authors": [ { "first": "M", "middle": [], "last": "Pasca", "suffix": "" }, { "first": "S", "middle": [ "M" ], "last": "Harabagiu", "suffix": "" } ], "year": 2001, "venue": "Workshop on WordNet and Other Lexical Resources: Applications, Extensions, and Customizations", "volume": "", "issue": "", "pages": "138--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pasca, M., and Harabagiu, S. M. (2001). The Informative Role of WordNet in Open-Domain Question Answering. NAACL 2001 Workshop on WordNet and Other Lexical Resources: Applications, Extensions, and Customizations, (pp. 138-143). Pittsburgh.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Disambiguating Highly Ambiguous Words. Computational Linguistics", "authors": [ { "first": "G", "middle": [], "last": "Towell", "suffix": "" }, { "first": "E", "middle": [ "M" ], "last": "Voorhees", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "125--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Towell, G., and Voorhees, E. M. (1998). Disambiguating Highly Ambiguous Words. Computational Linguistics , 125-145.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "The TREC-8 Question Answering Track Evaluation", "authors": [ { "first": "E", "middle": [ "M" ], "last": "Voorhees", "suffix": "" }, { "first": "D", "middle": [ "M" ], "last": "Tice", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "84--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Voorhees, E. M., and Tice, D. M. (1999). The TREC-8 Question Answering Track Evaluation. TREC-8, (pp. 84-106).", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Introduction to EuroWordNet", "authors": [ { "first": "P", "middle": [], "last": "Vossen", "suffix": "" } ], "year": 1998, "venue": "Computers and the Humanities", "volume": "", "issue": "", "pages": "73--89", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vossen, P. (1998). Introduction to EuroWordNet. Computers and the Humanities , 73-89.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "A Syntax-Lexical Semantics Interface Analysis of Collocation Errors", "authors": [ { "first": "D", "middle": [], "last": "Wible", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Kuo", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wible, D., and Kuo, C.-H. (2001). A Syntax-Lexical Semantics Interface Analysis of Collocation Errors. 
Pacific Second Language Research Forum.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "An example of context-aware search", "uris": null, "type_str": "figure", "num": null }, "FIGREF1": { "text": "The mention \"John McCarthy\" and its context.", "uris": null, "type_str": "figure", "num": null }, "FIGREF2": { "text": "Mihalcea and Csomai, 2007; Milne and Witten, 2008) is an example of entity-linking systems. These systems automatically augment user's input texts with hyperlinks to Wikipedia entries. For example, imagine Figure 2 with links removed, these systems will automatically detect them with anchors links to proper Wikipedia articles (e.g. John McCarthy in Figure 2 links to John McCarthy (computer scientist) in Wikipedia.). Mihalcea's system decomposes these task into two procedural: keyphrase extraction and word sense disambiguation. They achieve WSD by computing various linguistic features except the \"Keyphraseness\": how frequently one phrase in Wikipedia being hyperlinks. Milne and Witten's system disambiguates mentions by incorporating more link-based measures. They apply normalized Google Distance (Cilibrasi and Vitanyi, 2007) to compute relatedness between two Wikipedia articles, and training machine learning models. Unlike Mihalcea's system, they first disambiguate possible candidates in input document, and then use information from this pass of disambiguation to aid keyphrase extraction. Their system has good performance both on Wikipedia articles and wild-life news pages.", "uris": null, "type_str": "figure", "num": null }, "FIGREF3": { "text": "Outline of the training process.", "uris": null, "type_str": "figure", "num": null }, "FIGREF4": { "text": "An input document from an Wikipedia-like knowledge base Table 1. Samples of pairs constructed from Figure 4. The output of this stage is a collection of pairs of a certain knowledge base. Some pairs, automatically constructed, are shown inTable 1.Figure 5shows the algorithm for computing pairs from a Wikipedia-like database.", "uris": null, "type_str": "figure", "num": null }, "FIGREF5": { "text": "Generating pairs.", "uris": null, "type_str": "figure", "num": null }, "FIGREF6": { "text": "A link graph. Blue edges denote outlinks, green edges denote inlinks, orange edges denote both inlinks and outlinks. The input of this stage is two Wikipedia-like knowledge bases (e.g. , we augment the first knowledge base using the second one. The output of this stage is an augmented knowledge base, in which each document is augmented. Proceedings of the Twenty-Fourth Conference on Computational Linguistics and Speech Processing (ROCLING 2012) The augmentation process.", "uris": null, "type_str": "figure", "num": null }, "FIGREF7": { "text": "shows the knowledge base augmenting process. In Step (1) of the algorithm, we retrieve the list of all articles in kb c . For each article, we first examine whether it has an", "uris": null, "type_str": "figure", "num": null }, "FIGREF8": { "text": "Training SVM Classifier.InStep(1) we retrieve a list of pairs in which Term is an anchor text of randomly chosen hyperlink in Article, a randomly chosen article from kb. We treat Terms as query terms, and Articles as their contexts. Using pairs computed in 3.2.1, we can get candidates pairs (Step (3)). Then we iterate through them (Step (4)). 
In Step (5), for each pairs, we extract three features from them: lp: The link probability defined as P(Entity|Term), which could be easily computed since we have stored the histograms in 3.2.1.olinkSim:The link similarity considering only outlinks, i.e. Sim l (article, entity). ilinkSim: Likewise, the link similarity by considering only inlinks.", "uris": null, "type_str": "figure", "num": null }, "FIGREF9": { "text": ". Then, Section 4.2 lists the methods we use in comparison. Section 4.3 introduces the evaluation metrics. Finally, we report the settings of the parameters in Section 4.4.", "uris": null, "type_str": "figure", "num": null }, "FIGREF10": { "text": "The full model trained using out-links, in-links, and link probability without augmentation. OL+IL+LP (CN+EN): The most complete version of proposed system, using all features and augmentation process. -LP (CN+EN): The full model with augmentation minus the link probability feature. -OL (CN+EN): The full model with augmentation minus the out-links feature. -IL (CN+EN): The full model with augmentation minus the in-links feature.", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "content": "
(1) Generate Candidate Term-Entity Pairs From Knowledge Base (Section 3.2.1)
(2) Augment Knowledge Bases by Inter-Wiki Links (Section 3.2.2)
(3) Train Binary SVM Classification Model (Section 3.2.3)
", "html": null, "text": "Proceedings of the Twenty-Fourth Conference on Computational Linguistics and Speech Processing(ROCLING 2012) entities in written articles.The input to this stage is a set of Wikipedia-like knowledge base KB. With their hyperlinked nature, we could compute pairs easily. To provide broader coverage of query, we also take into account the redirect links and disambiguation pages.", "type_str": "table", "num": null }, "TABREF2": { "content": "
Term | Entity
", "html": null, "text": " pairs of ' '", "type_str": "table", "num": null }, "TABREF4": { "content": "
 | Chinese Wikipedia | English Wikipedia
Number of articles | 482,095 | 4,485,110
Percentage of language links | 67% | 9%
Average sense ambiguity | 3.1 | 6.7
Using these criteria, we randomly chose 501 distinct <hyperlink, article> pairs from
Chinese Wikipedia as our training data and another 2965 distinct <hyperlink, article> pairs as
testing data.
", "html": null, "text": "Statistics of Wikipedia", "type_str": "table", "num": null }, "TABREF5": { "content": "
System | Classifier accuracy | Entity accuracy
LP (Baseline) | 95.87 | 90.54
OL+IL+LP (CN) | 97.49 | 92.81
OL+IL+LP (CN+EN) | 97.61 | 93.02
-LP (CN+EN) | 90.38 | 71.38
-OL (CN+EN) | 97.46 | 92.69
-IL (CN+EN) | 95.94 | 88.81
", "html": null, "text": "The evaluation results of different systems As we can see, the full model (i.e. OL+IL+LP (CN+EN)) outperformed the strong baseline LP either on classifier accuracy or entity accuracy, which indicates that our Proceedings of the Twenty-Fourth Conference on Computational Linguistics and Speech Processing (ROCLING 2012)", "type_str": "table", "num": null } } } }