{ "paper_id": "O04-1020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:00:13.867436Z" }, "title": "Using the Web as Corpus for Un-supervised Learning in Question Answering", "authors": [ { "first": "Yi-Chia", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiao Tung University", "location": { "country": "Taiwan, R.O.C" } }, "email": "" }, { "first": "Jian-Cheng", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": { "country": "Taiwan, R.O.C" } }, "email": "" }, { "first": "Tyne", "middle": [], "last": "Liang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Chiao Tung University", "location": { "country": "Taiwan, R.O.C" } }, "email": "" }, { "first": "Jason", "middle": [ "S" ], "last": "Chang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Tsing Hua University", "location": { "country": "Taiwan, R.O.C" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we propose a method for unsupervised learning of relation between terms in questions and answer passages by using the Web as corpus. The method involves automatic acquisition of relevant answer passages from the Web for a set of questions and answers, as well as alignment of wh-phrases and keywords in questions with phrases in the answer passages. At run time, wh-phrases and keywords are transformed to a sequence of expanded query terms in order to bias the underlying search engine to give higher rank to relevant passages. 
Evaluation on a set of questions shows that our prototype improves the performance of a question answering system by increasing the precision rate of top ranking passages returned by the search engine.", "pdf_parse": { "paper_id": "O04-1020", "_pdf_hash": "", "abstract": [ { "text": "In this paper we propose a method for unsupervised learning of relations between terms in questions and answer passages by using the Web as corpus. The method involves automatic acquisition of relevant answer passages from the Web for a set of questions and answers, as well as alignment of wh-phrases and keywords in questions with phrases in the answer passages. At run time, wh-phrases and keywords are transformed into a sequence of expanded query terms in order to bias the underlying search engine to give higher rank to relevant passages. Evaluation on a set of questions shows that our prototype improves the performance of a question answering system by increasing the precision rate of top ranking passages returned by the search engine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Users have been submitting longer and longer queries to Web search engines. Recently, users have started to submit natural language queries instead of lists of keywords. This trend has encouraged many researchers to develop question answering systems that specifically aim at natural language questions, such as AskJeeves (www.ask.com) and START (www.ai.mit.edu/projects/infolab/).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "For typical question answering systems, document/passage retrieval is the most significant subtask. In this step, the QA system breaks a natural language question into a set of keywords, uses the keywords to query a search engine, and returns documents or passages related to the query for further processing. 
However, the keywords in questions are usually not very effective in retrieving relevant passages. Consider the question \"Who invented glasses with two foci?\" Typically, we will send the keywords \"invented glasses two foci\" to a search engine to retrieve documents or passages. Submitting such keywords to AltaVista, we got irrelevant information about astronomy or physics rather than \"Benjamin Franklin\", the inventor of bifocal glasses. Intuitively, if we include the phrase \"inventor of\" or \"bifocal\" in the query sent to the search engine (SE), we are likely to retrieve passages with the answer. We present the system Atlas (Automatic Transform Learning by Aligning Sentences of question and answer), which automatically learns the transforms from wh-phrases and keywords to n-grams in relevant passages by using the Web as corpus. The transformed query should be more likely to retrieve passages that contain the answer. For instance, consider the natural language question \"Who invented the light bulb?\" Using the keywords in the question directly, we end up with the keyword query, \"invented light bulb,\" for a search engine such as Google. We observed that such a query has room for improvement in terms of bringing in more instances of the relevant answer. Our experiment indicates that the proposed method will determine the best transforms for the wh-phrase \"who invented\", including \"inventor of\", \"was invented\", and \"invented by\". On the other hand, the best transforms discovered for the keyword \"bulb\" include \"light bulb\" and \"electric light.\" Intuitively, these transforms used together will convert the question into an expanded query for Google, \"(\"was invented\" || \"invented by\") (\"electric light\" || \"light bulb\")\", which is more effective in retrieving relevant sentences in the top ranking summaries returned by the search engine, such as \"The light bulb was invented by an illuminated scientist called Thomas Edison in 1879!\". 
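The conversion just described can be illustrated with a short Python sketch: each wh-phrase or keyword that has learned transforms is replaced by a disjunction of those transforms, and the groups are joined conjunctively. The transform table below is a toy stand-in taken from the worked example, not the system's actual learned table.

```python
# Sketch of Atlas-style query expansion. Each term with known transforms
# becomes a disjunction ("a" || "b"); untransformed keywords pass through.
def expand_query(terms, transforms):
    """Turn question terms into a Google-style expanded query string."""
    groups = []
    for term in terms:
        alts = transforms.get(term)
        if alts:  # replace the term by a disjunction of its transforms
            groups.append("(" + " || ".join(f'"{a}"' for a in alts) + ")")
        else:     # keep untransformed keywords as-is
            groups.append(term)
    return " ".join(groups)

# Hypothetical transform table (values taken from the worked example above).
transforms = {
    "who invented": ["was invented", "invented by"],
    "bulb": ["electric light", "light bulb"],
}
print(expand_query(["who invented", "bulb"], transforms))
# ("was invented" || "invented by") ("electric light" || "light bulb")
```

The conjunction-of-disjunctions form mirrors the Boolean query shown later in Section 3.3.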
One indicator of an effective query is the precision rate at R documents retrieved (P R ), the percentage of the first R top ranking Web pages (or summaries) which contain the answer. Another indicator is the mean reciprocal rank (MRR) of the first relevant document (or summary) returned. If the r-th document (summary) returned is the first one with the answer, then the reciprocal rank is 1/r. Our goal in this study is to explore methods that will automatically learn the transforms that convert natural language questions to queries with high average P R or MRR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of the paper is organized as follows. In Section 2, we survey the related work. In Section 3, we describe our method for unsupervised learning of transforms for question and answer pairs which are automatically acquired from the Web, and how we use the alignment results for effective query expansion in the QA system. The experiment and evaluation results are given in Section 4. In the last section, we conclude with discussion and future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Extensive work on question answering has been reported in the literature (Buchholz et al., 2001; Harabagiu et al., 2001; John et al., 2002; Shen et al., 2003) . In this study, we focus on learning the transforms that can be used to convert questions into effective queries in order to retrieve relevant passages. Hovy et al. (2000) utilized hypernyms and synonyms in WordNet to expand queries for increasing recall. However, blindly expanding a word to its synonyms sometimes causes undesirable effects. As for hypernyms, it is difficult to determine to how many hypernym levels a word should be expanded. 
In contrast to this approach, our method learns query transforms specific to a word or phrase based on real-life questions and answer passages.", "cite_spans": [ { "start": 78, "end": 101, "text": "(Buchholz et al., 2001;", "ref_id": "BIBREF1" }, { "start": 102, "end": 125, "text": "Harabagiu et al., 2001;", "ref_id": null }, { "start": 126, "end": 144, "text": "John et al., 2002;", "ref_id": "BIBREF5" }, { "start": 145, "end": 163, "text": "Shen et al., 2003)", "ref_id": null }, { "start": 319, "end": 337, "text": "Hovy et al. (2000)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In a recent study most closely related to our method, Agichtein et al. (2004) described the Tritus system, which automatically learns transforms of wh-phrases such as \"what is\" into \"refers to\" from FAQ data. Our method learns transforms for wh-phrases as well as keywords from the Web. The Tritus system uses heuristic rules and thresholds on term and document frequency to learn transforms, while we rely on a mathematical model from statistical machine translation. Shen, Lin and Chen (2003) proposed a method similar to the Tritus system for why-questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "Recently, Echihabi and Marcu (2003) presented a noisy channel approach to question answering. Their method also involves collecting answer passages from the Web and aligning words across a question and relevant answer passages. However, they require full parsing of the sentences and complicated decisions about making a \"cut\" in the parse tree to determine whether to align word, syntactic, or semantic categories. 
Our simple method is also based on alignment, but it does not require full parsing and performs alignment at the surface level of words and n-grams.", "cite_spans": [ { "start": 10, "end": 35, "text": "Echihabi and Marcu (2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In contrast to previous work on query expansion for question answering, we propose a method that learns query transforms for all phrases in a natural language question automatically from the Web.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "In this section, we present an unsupervised method for QA which automatically learns transforms from wh-phrases and keywords to answer n-grams by using the Web as corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method for Learning Question to Query Transforms", "sec_num": "3." }, { "text": "Given a set of natural language questions Qs and answer terms As, we obtain a collection of passages that contain the answer A to the question Q via some search engine SE. From the collection of answer passages APs, our goal is to discover a set of transforms T that can be applied to wh-phrases and keywords in Q in the hope that the transformed queries are more effective in retrieving passages containing A.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem Statement", "sec_num": "3.1" }, { "text": "This subsection illustrates the procedure for learning transforms T from wh-phrases and unigrams in Q into bigrams in AP. The reason we decide to use bigrams in AP is that a bigram contains more information than a unigram and is more effective in retrieving relevant passages. On the other hand, we break Qs into unigrams, following the standard approach in IR. 
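The segmentation just described, wh-phrases plus unigrams on the question side and bigrams on the passage side, can be sketched in Python. This is a minimal illustration; the wh-phrase list is a hypothetical stand-in for the list learned in Section 3.2.2.

```python
# Minimal sketch of the segmentation step: reserve a known wh-phrase as a
# single token, break the rest of Q into unigrams, and break AP into bigrams.
def segment_question(q, wh_phrases):
    """Split a question into a wh-phrase token plus keyword unigrams."""
    q = q.lower().rstrip("?")
    tokens = []
    for wp in wh_phrases:          # reserve a known wh-phrase as one token
        if q.startswith(wp + " "):
            tokens.append(wp)
            q = q[len(wp) + 1:]
            break
    tokens.extend(q.split())       # remaining keywords as unigrams
    return tokens

def bigrams(passage):
    """Break an answer passage into surface bigrams."""
    words = passage.lower().split()
    return [" ".join(words[i:i + 2]) for i in range(len(words) - 1)]

print(segment_question("Who invented the light bulb?", ["who invented"]))
# ['who invented', 'the', 'light', 'bulb']
print(bigrams("light bulb was invented"))
# ['light bulb', 'bulb was', 'was invented']
```

In the full procedure, low-count unigrams and bigrams with extreme tf/df values would additionally be filtered out, as described in Section 3.2.3.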
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure for Learning Transforms", "sec_num": "3.2" }, { "text": "In the first step of the learning process (see Figure 1 ), we retrieve a set of (Q, A, AP) pairs from the Web for training purpose where Q stands for a natural language question, and AP is a passage containing keywords in Q and the answer term A. The data gathering process is described as follows: , k 2 , \u2026, k n , A) Islamabad is the capital of Pakistan. Current time, \u2026 capital, Pakistan, Islamabad \u2026the airport which serves Pakistan's capital Islamabad, \u2026", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 55, "text": "Figure 1", "ref_id": null }, { "start": 299, "end": 318, "text": ", k 2 , \u2026, k n , A)", "ref_id": null } ], "eq_spans": [], "section": "Collecting Training Material from the Web", "sec_num": "3.2.1" }, { "text": "In the second step, we produce a set of high frequency phrases that characterize different question categories. We follow the method proposed by Agichtein et al. (2004) . The method simply involves computing the frequency of all n-grams in Qs and filters out those with small counts. We will treat the wh-phrases (QPs) as a token in the subsequent steps. However, we differ from their approach in that we are not limited to n-grams of function words. For instance, we derived \"in what year\", \"who wrote\", etc. More examples of wh-phrases are listed in Table 2 . Table 2 . An example of wh-phrases that are used", "cite_spans": [ { "start": 145, "end": 168, "text": "Agichtein et al. 
(2004)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 552, "end": 559, "text": "Table 2", "ref_id": null }, { "start": 562, "end": 569, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Selecting Frequent Wh-phrases", "sec_num": "3.2.2" }, { "text": "\"what is the\", \"in what year\", \"what was\", \u2026 Who \"who was the\", \"who wrote\", \u2026 Which \"which country\", \"with which\", \u2026 \u2026 \u2026", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wh-words Wh-phrases QPs What", "sec_num": null }, { "text": "In the third step, we use word alignment techniques originally developed for statistical machine translation to find out relation between wh-phrases or keywords in Q and n-grams in AP. We use the Competitive Linking Algorithm proposed by Melamed (1997) to align (Q, AP) pair. We proceed as follows:", "cite_spans": [ { "start": 238, "end": 252, "text": "Melamed (1997)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Learning Question to Query Transforms", "sec_num": "3.2.3" }, { "text": "1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Question to Query Transforms", "sec_num": "3.2.3" }, { "text": "Perform Part of Speech (POS) tagging on both Q and AP in the collection. (See Table 3 and 4) 2.", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 85, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Learning Question to Query Transforms", "sec_num": "3.2.3" }, { "text": "Replace all instances of A with the tag in AP. For example, the answer \"Islamabad\" in AP for the question \"What is the capital of Pakistan?\" is replaced with . (See Table 4 .) 
The purpose of <ANS> is to avoid data sparseness while counting bigrams in the following step.", "cite_spans": [], "ref_spans": [ { "start": 176, "end": 183, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Learning Question to Query Transforms", "sec_num": "3.2.3" }, { "text": "Segment Q into unigrams or QPs and eliminate unigrams with low counts. We denote the remaining unigrams as q 1 , q 2 , ..., q n . (See Table 5 ) 4.", "cite_spans": [], "ref_spans": [ { "start": 135, "end": 142, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Segment AP into bigrams and eliminate bigrams with a small term frequency (tf) or a very large document frequency (df). We denote the remaining bigrams a 1 , a 2 , ..., a m . (See Table 6 ) 5.", "cite_spans": [], "ref_spans": [ { "start": 175, "end": 182, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "For all i, j, calculate the log likelihood ratio (LLR) of q i and a j . (See Table 7 ) 6.", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 80, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Eliminate candidates with an LLR value lower than 7.88. (See Table 8 ) 7.", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 67, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Sort the list of (q i , a j ) pairs by decreasing LLR value. (See Table 8 ) 8.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 63, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Go down the list and select a pair if it does not conflict with a previous selection. 9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "Stop when running out of pairs in the list. 10. Produce the list of aligned pairs for all Qs and APs, and select the top N bigrams, a 1 , a 2 , ..., a N , for every wh-phrase or unigram q i in the alignment pairs. (See Table 9 ) ", "cite_spans": [], "ref_spans": [ { "start": 197, "end": 204, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "At run time, Q is broken into wh-phrases and keywords, which are converted to a sequence of query terms according to the transforms based on the alignment results described in Section 3.2, in order to give higher ranks to passages that contain the answer for a specific SE. See Table 10 for an example of the conversion process for the question \"Who invented the light bulb?\" ", "cite_spans": [], "ref_spans": [ { "start": 270, "end": 278, "text": "Table 10", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Runtime Transformation of Questions", "sec_num": "3.3" }, { "text": "Boolean query: ((was invented) OR (invented by)) AND ((electric light) OR (light bulb)) Equivalent Google query: (\"was invented\" || \"invented by\") (\"electric light\" || \"light bulb\")", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Expanded query", "sec_num": null }, { "text": "Our training data set was collected from http://www.quiz-zone.co.uk. We used 3,581 distinct (Q, A) pairs to automatically retrieve APs from the search engine Google. For each Q, the top 100 summaries returned by Google were downloaded. See Table 11 for details of the training corpus. ", "cite_spans": [], "ref_spans": [ { "start": 241, "end": 249, "text": "Table 11", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Training Data Set", "sec_num": "4.1" }, { "text": "We chose the top 2 (N=2) bigrams for each QP or keyword in the alignment results. Table 12 lists examples of QPs or keywords and their two corresponding transformed bigrams. 
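Steps 5-9 of the procedure above (LLR thresholding followed by Melamed-style Competitive Linking) can be sketched as follows. The association scores here are illustrative stand-ins for actual log-likelihood ratios, and the linking is shown for a single (Q, AP) pair.

```python
# Sketch of steps 5-9: drop candidate (q_i, a_j) pairs whose association
# score falls below the 7.88 LLR threshold, then apply Competitive Linking:
# accept pairs in decreasing score order, skipping any pair whose q or a
# side is already linked. Scores below are hypothetical, not learned values.
def competitive_linking(scores, threshold=7.88):
    links, used_q, used_a = [], set(), set()
    for (q, a), llr in sorted(scores.items(), key=lambda kv: -kv[1]):
        if llr < threshold:
            break                    # remaining candidates are all below threshold
        if q not in used_q and a not in used_a:
            links.append((q, a, llr))
            used_q.add(q)
            used_a.add(a)
    return links

scores = {
    ("capital", "capital of"): 545.0,
    ("pakistan", "of pakistan"): 300.0,
    ("capital", "capital city"): 241.0,   # loses: "capital" already linked
    ("pakistan", "capital of"): 60.0,     # loses: both sides already linked
    ("pakistan", "time in"): 3.2,         # below threshold, discarded
}
for link in competitive_linking(scores):
    print(link)
```

Over the whole corpus, the counts of such links per (q, bigram) pair yield the alignment counts of Table 12, from which the top N bigrams per term are kept.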
", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 87, "text": "Table 12", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Alignment Results", "sec_num": "4.2" }, { "text": "We used a test set of ten questions which are set aside from the training corpus. Table 13 shows the keyword queries and the expanded queries based on the transforms learned from the Web. We evaluated the expanded query by the mean reciprocal rank (MRR) and the precision rate at ten summaries returned by Google. For comparison, we also evaluated Google without applying query transforms. During experiment, the ten batches of returned summaries for the ten questions were evaluated by two human judges. As we can see in Table 14 , using keywords from the natural language questions directly to query Google resulted in an MMR value of 0.48. However, when using expanded queries provided by the Atlas system, we had an MMR of 0.70, a statistically significant improvement. The average precision rate was improved slightly from 40% to 47%. The experimental results show that the Atlas system used in conjunction with the search engine Google outperforms the underlying search engine itself. (\"capital +of\" || \"capital city\") Pakistan What became the 50th state of the America? became 50th state America (\"+to become\" || \"leader +of\") \"50th state\" \"United State\" Who had a hit in 1994 with \"Zombie\"? hit 1994 \"Zombie\"", "cite_spans": [], "ref_spans": [ { "start": 82, "end": 90, "text": "Table 13", "ref_id": "TABREF5" }, { "start": 522, "end": 530, "text": "Table 14", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "E valuation Results", "sec_num": "4.3" }, { "text": "(\"number one\" || \"hit +in\") 1994 \"Zombie\" In which year did Coronation Street begin? year Coronation Street begin (\"was found\" ||\"was born\") Coronation Street (\"began +in\" || \"began on\") In \"The Simpsons\", what is the name of Ned Flanders wife? 
\"The Simpsons\" name Ned Flanders wife \"The Simpsons\" (\"name +is\" || \"name +of\") Ned Flanders wife In mythology, who supported the heavens on his shoulders? Mythology supported heavens shoulders \"+in Greek\" \"+of +his\" supported heavens shoulders Which Saint's day is on March 1st? Saint day March 1st \"+is +a\" Saint \"St\" March 1st", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "E valuation Results", "sec_num": "4.3" }, { "text": "What is the largest city in Switzerland?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "E valuation Results", "sec_num": "4.3" }, { "text": "largest city Switzerland (\"largest country\" || \"second largest\") Switzerland Who directed the Oscar-winning film \"The English Patient\"? directed Oscar-winning film \"The English Patient\" (\"directed +by\" || \"+and directed\") Oscar-winning film \"The English Patient\" Which country was once ruled by Tsars? country once ruled Tsars (\"country +is\" || \"country +in\") \"ruled +by\" Tsars ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "E valuation Results", "sec_num": "4.3" }, { "text": "We show that our method clearly provide means for learning transformation from a natural language question to a query by applying statistical word alignment technique. The method involves automatically acquiring relevant passages from the Web for a set of questions and answers, aligning phases across from questions to answer passages in order to create phrase transforms that involve wh-words as well as content words. Evaluation on a set of questions shows that our prototype in conjunction with a search engine outperforms the underlying search engine used alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5." }, { "text": "Many future directions present themselves. 
For example, the patterns learned from answer passages acquired on the Web can be extended to include longer and more effective n-grams to further boost the MRR value or average precision rate. Additionally, an interesting direction to explore is creating phrase transforms that contain answer extraction patterns. These answer extraction patterns can be learned for different types of answers. Yet another direction of research would be to provide confidence factors for ranking the likelihood of the many candidate answers extracted using patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5." }, { "text": "In summary, we have introduced a method for learning query transforms that improves the ability to retrieve passages with answers using the Web as corpus. The method involves finding query transformations based on techniques borrowed from noisy-channel training in statistical machine translation. We have implemented and thoroughly evaluated the method as applied to a set of more than 4,000 questions. We have shown that the method can be used with a search engine as an effective component in a question answering system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "5." } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Noisy-Channel Approach to Question Answering", "authors": [ { "first": "Abdessamad", "middle": [], "last": "Echihabi", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdessamad Echihabi, Daniel Marcu. A Noisy-Channel Approach to Question Answering. 
In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pp.16-23, July 2003.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Using grammatical relations, answer frequencies and the World Wide Web for question answering", "authors": [ { "first": "Sabine", "middle": [], "last": "Buchholz", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Tenth Text REtrieval Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Buchholz, Sabine. Using grammatical relations, answer frequencies and the World Wide Web for question answering. In Proceedings of the Tenth Text REtrieval Conference (TREC 2001).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning to find answers to questions on the Web", "authors": [ { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Lawrence", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Gravano", "suffix": "" } ], "year": 2004, "venue": "ACM Transactions on Internet Technology (TOIT)", "volume": "4", "issue": "2", "pages": "129--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Agichtein, Steve Lawrence, Luis Gravano. Learning to find answers to questions on the Web. 
In ACM Transactions on Internet Technology (TOIT), Volume 4, Issue 2, pp.129-162, 2004.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "FALCON: Boosting Knowledge for Answer Engines", "authors": [ { "first": "S", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "D", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "M", "middle": [], "last": "Pasca", "suffix": "" }, { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "M", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "R", "middle": [], "last": "Buneascu", "suffix": "" }, { "first": "R", "middle": [], "last": "G\u00eerju", "suffix": "" }, { "first": "V", "middle": [], "last": "Rus", "suffix": "" }, { "first": "P", "middle": [], "last": "Morarescu", "suffix": "" } ], "year": null, "venue": "Proceedings of the 9th Text Retrieval Conference (TREC-9", "volume": "", "issue": "", "pages": "479--488", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harabagiu, S., D. Moldovan, M. Pasca, R. Mihalcea, M. Surdeanu, R. Buneascu, R. G\u00eerju, V. Rus and P. Morarescu. FALCON: Boosting Knowledge for Answer Engines. In Proceedings of the 9th Text Retrieval Conference (TREC-9), pp.479-488.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Question answering in Webclopedia", "authors": [ { "first": "E", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "L", "middle": [], "last": "Gerber", "suffix": "" }, { "first": "U", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "M", "middle": [], "last": "Junk", "suffix": "" }, { "first": "C", "middle": [ "Y" ], "last": "Lin", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the TREC-9 Question Answering Track", "volume": "", "issue": "", "pages": "655--672", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hovy, E., Gerber, L., Hermjakob, U., Junk, M., and Lin, CY. Question answering in Webclopedia. 
In Proceedings of the TREC-9 Question Answering Track, pp.655-672, 2000.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Use of WordNet Hypernyms for Answering What-Is Questions", "authors": [ { "first": "John", "middle": [ "M" ], "last": "Prager", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Chu-Carroll", "suffix": "" }, { "first": "Krysztof", "middle": [], "last": "Czuba", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the TREC-2002 Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John M. Prager, Jennifer Chu-Carroll, Krysztof Czuba. Use of WordNet Hypernyms for Answering What-Is Questions. In Proceedings of the TREC-2002 Conference (TREC 2002).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A Word-to-Word Model of Translational Equivalence", "authors": [ { "first": "I", "middle": [ "Dan" ], "last": "Melamed", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "490--497", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melamed, I. Dan. A Word-to-Word Model of Translational Equivalence. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pp.490-497, 1997. [8] Shen, Lin and Chen. (Title in Chinese; a method for why-questions.) In Proceedings of Rocling 2003, pp.211-229, 2003.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "(1) Automatically collect pairs of Q and AP from the Web for training. (Section 3.2.1) (2) Select frequent wh-phrases. (Section 3.2.2) (3) Apply the alignment technique to the collected material. (Section 3.2.3) Figure 1. Procedure for learning transforms" }, "TABREF0": { "type_str": "table", "text": "1. For each (Q, A) pair in the given collection, we extract keywords K of Q, say, k 1 , k 2 , \u2026 , k n . 2. 
Submit (k 1 , k 2 , \u2026 , k n , A) as a query to SE. 3. Download the top M summaries that are returned by SE. 4. Retain only those summaries containing A. See Table 1 for details.", "html": null, "content": "
Table 1. An example of converting a question (Q) with its answer (A)
to SE query and retrieving answer passages (AP)
(Q, A) | AP
What is the capital of Pakistan? Answer: (Islamabad) | Bungalow For Rent in Islamabad, Capital Pakistan. Beautiful Big House For \u2026
(k 1 , k 2 , \u2026, k n , A): capital, Pakistan, Islamabad | Islamabad is the capital of Pakistan. Current time, \u2026 \u2026the airport which serves Pakistan's capital Islamabad, \u2026
", "num": null }, "TABREF1": { "type_str": "table", "text": "Part of Speech of Q Wh-phrases and unigrams in Q The entries in the shaded area are eliminated for their low counts Combination of q i and a j Examples of transforms selected from alignment results for N=3", "html": null, "content": "
Q word | Lemma | Position | POS
What is the | what be the | 1 | *
capital | capital | 2 | nn
of | of | 3 | in
Pakistan | Pakistan | 4 | np
? | ? | 5 | .
Table 4. Part of Speech of AP
AP word | Lemma | Position | POS
Most | most | 1 | rbt
of | of | 2 | in
Pakistan | Pakistan | 3 | np
rainfall | rainfall | 4 | nn
is | be | 5 | bez
scarce | scarce | 6 | jj
. | . | 7 | .
Islamabad | <ANS> | 8 | np
, | , | 9 | ,
the | the | 10 | at
capital | capital | 11 | nn
of | of | 12 | in
Pakistan | Pakistan | 13 | np
since | since | 14 | in
1963 | 1963 | 15 | cd
, | , | 16 | ,
and | and | 17 | cc
Rawalpindi | Rawalpindi | 18 | np
, | , | 19 | ,
are | be | 20 | ber
both | both | 21 | abx
located | locate | 22 | vbn
on | on | 23 | rp
the | the | 24 | at
Pothowar | Pothowar | 25 | np
Plain | Plain | 26 | nn
", "num": null }, "TABREF2": { "type_str": "table", "text": "An example of transformation from question into query", "html": null, "content": "
Question
Who invented the light bulb?
Wh-phrase | Keywords
Who invented | light | bulb
Transform wh-phrase and keywords
was invented | electric light | electric light
invented by | light bulb | light bulb
", "num": null }, "TABREF3": { "type_str": "table", "text": "The training corpus", "html": null, "content": "
Training data set | Distinct (Q, A) | Distinct (Q, AP)
Quiz-Zone | 3,581 | 99,697
", "num": null }, "TABREF4": { "type_str": "table", "text": "Parts of alignment results", "html": null, "content": "
QP or Keyword in Q | Bigram in AP | Alignment count
invent | be invent | 175
invent | invent by | 43
who wrote | be bear | 94
who wrote | he write | 87
capital | capital of | 545
capital | capital city | 241
", "num": null }, "TABREF5": { "type_str": "table", "text": "Test questions", "html": null, "content": "
Q | Keyword query for Google (GO) | Expanded query for Google (AT+GO)
What is the capital of Pakistan? | capital Pakistan | (\"capital +of\" || \"capital city\") Pakistan
", "num": null }, "TABREF6": { "type_str": "table", "text": "Evaluation results", "html": null, "content": "
Performances | MRR | Precision (%)
AT+GO (Atlas expanded query for Google) | 0.70 | 47
GO (Direct keyword query for Google) | 0.48 | 40
", "num": null } } } }