{ "paper_id": "O08-5003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:02:42.345540Z" }, "title": "Question Analysis and Answer Passage Retrieval for Opinion Question Answering Systems", "authors": [ { "first": "Lun-Wei", "middle": [], "last": "Ku", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University", "location": { "addrLine": "No. 1, Sec. 4, Roosevelt Road", "postCode": "10617", "settlement": "Taipei", "country": "Taiwan" } }, "email": "lwku@nlg.csie.ntu.edu.tw" }, { "first": "Yu-Ting", "middle": [], "last": "Liang", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University", "location": { "addrLine": "No. 1, Sec. 4, Roosevelt Road", "postCode": "10617", "settlement": "Taipei", "country": "Taiwan" } }, "email": "" }, { "first": "Hsin-Hsi", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Taiwan University", "location": { "addrLine": "No. 1, Sec. 4, Roosevelt Road", "postCode": "10617", "settlement": "Taipei", "country": "Taiwan" } }, "email": "hhchen@csie.ntu.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Question answering systems provide an elegant way for people to access an underlying knowledge base. However, people are interested in not only factual questions, but also opinions. This paper deals with question analysis and answer passage retrieval in opinion QA systems. For question analysis, six opinion question types are defined. A two-layered framework utilizing two question type classifiers is proposed. Algorithms for these two classifiers are described. The performance achieves 87.8% in general question classification and 92.5% in opinion question classification. The question focus is detected to form a query for the information retrieval system and the question polarity is detected to retain relevant sentences which have the same polarity as the question. For answer passage retrieval, three components are introduced. Relevant sentences retrieved are further identified as to whether the focus (Focus Detection) is in a scope of opinion (Opinion Scope Identification) or not, and, if yes, whether the polarity of the scope and the polarity of the question (Polarity Detection) match with each other. The best model achieves an F-measure of 40.59% by adopting partial match for relevance detection at the level of meaningful unit. With relevance issues removed, the F-measure of the best model boosts up to 84.96%.", "pdf_parse": { "paper_id": "O08-5003", "_pdf_hash": "", "abstract": [ { "text": "Question answering systems provide an elegant way for people to access an underlying knowledge base. However, people are interested in not only factual questions, but also opinions. This paper deals with question analysis and answer passage retrieval in opinion QA systems. For question analysis, six opinion question types are defined. A two-layered framework utilizing two question type classifiers is proposed. Algorithms for these two classifiers are described. The performance achieves 87.8% in general question classification and 92.5% in opinion question classification. The question focus is detected to form a query for the information retrieval system and the question polarity is detected to retain relevant sentences which have the same polarity as the question. For answer passage retrieval, three components are introduced. 
Relevant sentences retrieved are further identified as to whether the focus (Focus Detection) is in a scope of opinion (Opinion Scope Identification) or not, and, if yes, whether the polarity of the scope and the polarity of the question (Polarity Detection) match with each other. The best model achieves an F-measure of 40.59% by adopting partial match for relevance detection at the level of meaningful unit. With relevance issues removed, the F-measure of the best model boosts up to 84.96%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Most of the state-of-the-art Question Answering (QA) systems serve the needs of answering factual questions such as \"When was James Dean born?\" and \"Who won the Nobel Peace Prize in 1991?\" However, in addition to facts, people would also like to know about others' opinions, thoughts, and feelings toward some specific topics, groups, and events. Opinion questions reveal answers about people's opinions (e.g., \"What do Americans think of the US-Iraq war?\" and \"What is the public opinion on human cloning?\") which tend to scatter across different documents. Traditional QA approaches for factual questions are not effective enough to retrieve answers for opinion questions [Stoyanov et al. 2005] , so an opinion QA system is essential.", "cite_spans": [ { "start": 674, "end": 696, "text": "[Stoyanov et al. 2005]", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Most of the research on QA systems has been developed for factual questions, and the association of subjective information with question answering has not yet been studied much. As for subjective information, Wiebe [2000] proposed a method to identify strong clues of subjectivity of adjectives. presented a subjectivity classifier using lists of subjective nouns learned by bootstrapping algorithms. proposed a bootstrapping process to learn linguistically rich extraction patterns for subjective expressions. Kim and Hovy [2004] presented a system to determine word sentiments and combined sentiments within a sentence. Pang, Lee, and Vaithyanathan [2002] classified documents not by topic, but by the overall sentiment, and then determined the polarity of a review. Wiebe et al. [2002] proposed a method for opinion summarization. Wilson et al. [2005] presented a phrase-level sentiment analysis to automatically identify the contextual polarity. Ku et al. [2006] proposed a method to automatically mine and organize opinions from heterogeneous information sources.", "cite_spans": [ { "start": 209, "end": 221, "text": "Wiebe [2000]", "ref_id": "BIBREF16" }, { "start": 511, "end": 530, "text": "Kim and Hovy [2004]", "ref_id": "BIBREF3" }, { "start": 622, "end": 657, "text": "Pang, Lee, and Vaithyanathan [2002]", "ref_id": "BIBREF9" }, { "start": 769, "end": 788, "text": "Wiebe et al. [2002]", "ref_id": "BIBREF17" }, { "start": 834, "end": 854, "text": "Wilson et al. [2005]", "ref_id": "BIBREF20" }, { "start": 950, "end": 966, "text": "Ku et al. [2006]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Some research has gone from opinion analysis in texts toward that in QA systems. Cardie et al. [2003] took advantage of opinion summarization to support Multi-Perspective Question Answering (MPQA) system which aims to extract opinion-oriented information of a question. 
Yu and Hatzivassiloglou [2003] separated opinions from facts at both document and sentence levels. They intended to cluster opinion sentences from the same perspective together and summarize them as answers to opinion questions. Kim and Hovy [2005] identified opinion holders, which are frequently asked in opinion questions.", "cite_spans": [ { "start": 81, "end": 101, "text": "Cardie et al. [2003]", "ref_id": "BIBREF0" }, { "start": 270, "end": 300, "text": "Yu and Hatzivassiloglou [2003]", "ref_id": "BIBREF21" }, { "start": 499, "end": 518, "text": "Kim and Hovy [2005]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This paper deals with two major problems in opinion QA systems, question analysis and answer passage retrieval. Several issues, including how to separate opinion questions from factual ones, how to define question types for opinion questions, how to correctly classify opinion questions into corresponding types, how to present answers for different types of opinion questions, and how to retrieve answer passages for opinion questions are discussed. In this paper, the unit of a passage is a sentence, though a passage can sometimes refer to a set of sentences, such as a paragraph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "2. An Opinion QA Framework Figure 1 is a framework of the opinion QA system. The question (Question Q) is initially submitted into a part of speech tagger (POS Tagger), then the question is analyzed in three aspects by two-layered classification (Two-Layered Classification), including the question focus (Q Focus), the question polarity (Q Polarity), and the opinion question type (Opinion Q Type). The question focus defines the main concept of the question, while the question polarity refers to the positive, neutral, or negative tendency of the opinionated question. The former two attributes are further applied in answer passage retrieval (Answer Passage Retrieval). The question focus is the query for an information retrieval (IR) system to retrieve relevant sentences. The question polarity, which is the opinion polarity of the question, is utilized to screen out relevant sentences with different polarities to the question. For example, the polarity of the question \"Who would like to use a Civil ID card?\" should be positive, and non-supportive evidence should not be extracted for further processing. With answer passages retrieved, answer extraction extracts text spans as answers according to the opinion question types, and outputs answers to users. This paper focuses on the retrieved answer passages and the opinion type of the question for answer extraction. Answer extraction is not included in our discussion and is left as a future work. ", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 35, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 309 Opinion Question Answering Systems", "sec_num": null }, { "text": "The experimental corpus comes from four sources, TREC 1 , NTCIR 2 , the Internet Polls, and OPQ. TREC and NTCIR are two of the three major information retrieval evaluation forums in the world. Their evaluation tracks are in natural language processing and information retrieval domains such as large-scale information retrieval, question answering, genomics, cross language processing, and so on. 
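To make the Figure 1 framework concrete, the following is a minimal sketch of how its stages could be wired together: POS tagging, two-layered classification into question focus, polarity, and opinion type, then answer passage retrieval that keeps only polarity-matching sentences. All names here (QuestionAnalysis, pos_tag, classify, scope_polarity, ir_system) are illustrative placeholders assumed for the sketch, not components released with the paper, and answer extraction is omitted because the paper leaves it as future work.

```python
# Minimal wiring sketch of the opinion QA pipeline in Figure 1 (assumed names).
from dataclasses import dataclass
from typing import List

@dataclass
class QuestionAnalysis:
    focus: List[str]   # content words forming the query (Q Focus)
    polarity: int      # +1 positive, 0 neutral, -1 negative (Q Polarity)
    opinion_type: str  # one of HD, TG, AT, RS, MJ, YN (Opinion Q Type)

def answer_opinion_question(question: str, ir_system, analyzer) -> List[str]:
    """Question analysis -> answer passage retrieval (answer extraction is future work)."""
    tokens = analyzer.pos_tag(question)              # POS Tagger
    analysis = analyzer.classify(tokens)             # two-layered classification
    query = " OR ".join(analysis.focus)              # question focus as a Boolean OR query
    candidates = ir_system.retrieve(query)           # relevant sentences from the knowledge base
    # Keep only passages whose opinion polarity matches the question polarity.
    return [s for s in candidates
            if analyzer.scope_polarity(s, analysis.focus) == analysis.polarity]
```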
We collected 500 factual questions from the main task of QA Track in TREC-11. Since the documents for answer extraction are in Chinese, the English questions were translated into Chinese manually for the experiments. A total of 1,577 factual questions are obtained from the developing question set of the CLQA task in NTCIR-5. Questions from public opinion polls in three public media websites -say, China Times, Era, and TVBS, are crawled. OPQ is developed for this research, and it contains both factual and opinion questions. To construct the question corpus OPQ, annotators are given titles and descriptions of six opinion topics selected from NTCIR-2 and NTCIR-3. Annotators freely ask any three factual questions and seven opinion questions for each topic. Duplicated questions are dropped and a total of 1,011 questions are collected. Within these 1,011 questions in OPQ, 304 are factual questions and the other 707 are opinion questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Corpus Preparation", "sec_num": "3." }, { "text": "In total, we collected 2,443 factual questions and 1,289 opinion questions from four different sources. These 3,732 questions, shown in Table 1 , are used for our experiments. There are some challenging issues in extracting answers automatically by opinion QA systems. Opinionated questions are generally related to holders, targets, and opinions. Holders are the named entities who express opinions, while targets are the objects these opinions are related to. Opinions are comments that holders express toward targets. We categorize the challenges in question analysis into three parts: holders, opinions and concepts.", "cite_spans": [], "ref_spans": [ { "start": 136, "end": 143, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental Corpus Preparation", "sec_num": "3." }, { "text": "(a) Challenging issues for holders", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Opinion Question Answering Systems", "sec_num": null }, { "text": "(1) To automatically identify named entities expressing opinions is imperative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Opinion Question Answering Systems", "sec_num": null }, { "text": "(2) The opinion holders may be a group. For example, answers to the question \"How do Americans feel about the affair of the U.S. president Clinton?\", consist of opinions from any American.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Opinion Question Answering Systems", "sec_num": null }, { "text": "(3) The classification of opinion holders may be necessary. To answer questions like \"What kind of people support the abolishment of the Joint College Entrance Examination?\", QA systems have to find people having opinions toward the examination and classify them into the correct category, such as students, teachers, scholars, parents, and so on.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Opinion Question Answering Systems", "sec_num": null }, { "text": "(b) Challenging issues for opinions (4) Knowing whether questions themselves contain subjective information and deciding their opinion polarities is necessary. 
The question \"Who disagrees with the idea of surrogate mothers?\" points out a negative attitude, and the answer to this question is expected to be a list of persons or organizations that have negative opinions toward the idea of surrogate mothers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Opinion Question Answering Systems", "sec_num": null }, { "text": "(5) The comparison and the summarization of positive and negative opinions may be required. In the question \"Is using a civil ID card more advantageous or disadvantageous?\", opinions expressing advantages and disadvantages have to be contrasted and scored to represent answers as \"More advantageous\" or \"More disadvantageous\" with evidence listed to users. (c) Challenging issues for concepts (6) It is essential to understand the concepts of opinions and perform the expansion of concepts to extract correct answers. In the question \"Is a civil ID card secure?\" it is vital to know the definition and conditions of being secure. For example, keeping the public's privacy, ensuring the system's security, and protecting fingerprint obtainment are possible security points. 7We may have to expand the concept of target. For instance, in the question \"What do Taiwanese think about the substitute program for the Joint College Entrance Examination?\", the system has to know what the substitute program is, and seek for text spans which hold opinions towards it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Opinion Question Answering Systems", "sec_num": null }, { "text": "Among the 707 opinion questions from OPQ corpus, answers of 160 opinion questions are found in the NTCIR corpus. These 160 opinion questions are analyzed based on the above seven challenges. Table 2 lists the number of questions (#Q) with respect to the number of challenges (#C). Due to the heavy manual effort of annotations, only a total of 60 questions, which include one to three challenges, are selected for further annotation. Sentences are annotated as to whether they are opinions (Opinion), whether they are relevant to the NTCIR topic of the document in which they are (Rel2T), whether they are relevant to the question (Rel2Q), and whether they contain answers (AnswerQ). If sentences are annotated as relevant to the question, annotators further annotate the text spans which contribute answers to the question (CorrectMU). A total of 1,952 sentences are annotated. These documents are relevant to six opinionated topics, including civil ID card, the abolishment of Joint College Entrance Examination, the Chinese-English phonetic transcription system, anti-Meinung Dam construction, hewing down of Chinese junipers in Chilan, and surrogate mother.", "cite_spans": [], "ref_spans": [ { "start": 191, "end": 198, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Opinion Question Answering Systems", "sec_num": null }, { "text": "A two-layered classification, i.e., the first classifier Q-Classifier and the second classifier OPQ-Classifier, is proposed. Q-Classifier separates opinion questions from factual ones, and OPQ-Classifier determines the types of opinion questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two-Layered Question Classification", "sec_num": "4." }, { "text": "As mentioned, the holder, the target and the opinion expressed are three important factors in an opinion expression. Besides, opinion questions could be asked in the same way as factual questions. 
Considering these factors and the answer format in both opinion questions themselves and their corresponding answers, we define six opinion question types as follows. Among these types, holder, target, and attitude types are related to the opinionated factors of questions, while reason, majority, and yes/no types concern the answer format of opinionated questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Types of Opinion Questions", "sec_num": "4.1" }, { "text": "(1) Holder (HD) Definition: Asking who the expresser of the specific opinion is.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Types of Opinion Questions", "sec_num": "4.1" }, { "text": "Example: Who supports the civil ID card?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Types of Opinion Questions", "sec_num": "4.1" }, { "text": "Answer: Entities and the corresponding evidence. Answer: Question-related opinions, separated into support, neutral, and non-support categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Types of Opinion Questions", "sec_num": "4.1" }, { "text": "(", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Types of Opinion Questions", "sec_num": "4.1" }, { "text": "Definition: Asking the reasons of an explicit or an implicit holder's attitude to a specific target.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4) Reason (RS)", "sec_num": null }, { "text": "Example: Why do people think it better not to have the college entrance exam?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4) Reason (RS)", "sec_num": null }, { "text": "Answer: Reasons for taking the specified stand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4) Reason (RS)", "sec_num": null }, { "text": "(5) Majority (MJ)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4) Reason (RS)", "sec_num": null }, { "text": "Definition: Asking which option, listed or not listed, is the majority opinion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4) Reason (RS)", "sec_num": null }, { "text": "Example: If the government tries to carry out the use of the civil ID card, will its reputation get better or worse?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4) Reason (RS)", "sec_num": null }, { "text": "Answer: The majority of support, neutral and non-support evidence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4) Reason (RS)", "sec_num": null }, { "text": "(6) Yes/No (YN)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4) Reason (RS)", "sec_num": null }, { "text": "Definition: Asking whether their statements are correct. Questions asking for a binary answer are included.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4) Reason (RS)", "sec_num": null }, { "text": "Example: Was the airplane crash caused by management problems?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4) Reason (RS)", "sec_num": null }, { "text": "Answer: The stronger opinion, i.e. yes or no.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4) Reason (RS)", "sec_num": null }, { "text": "Q-Classifier distinguishes opinion questions from factual ones. We use See5 [Quinlan 2000 ] to train the Q-Classifier. Seven features are employed. 
The feature pretype (PTY) denotes types in factual QA systems such as SELECTION, YESNO, METHOD, REASON, PERSON, LOCATION, PERSONDEF, DATE, QUANTITY, DEFINITION, OBJECT, and MISC and they are extracted via a conventional QA system [Lin 2004] . For example, the value of pretype in \"Who is Tom Cruise married to?\" is PERSON.", "cite_spans": [ { "start": 76, "end": 89, "text": "[Quinlan 2000", "ref_id": "BIBREF10" }, { "start": 378, "end": 388, "text": "[Lin 2004]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Q-Classifier", "sec_num": "4.2" }, { "text": "The other six features are operator (OPR), positive (POS), negative (NEG), totalow (TOW), totalscore (TSR), and maxscore (MSR). A public available sentiment dictionary [Ku et al. 2006] , which contains 2,655 positive opinion keywords, 7,767 negative opinion keywords, and 150 opinion operators, is used to tell if there are any positive (negative) opinion keywords and operators in questions. Each opinion keyword has a score expressing the degree of tendency. The feature operator (OPR) includes words of actions for expressing opinions. For example, say, think, and believe can be hints for extracting opinions. A total of 151 operators are manually collected. The features positive (POS) and negative (NEG) denote the numbers of positive opinion words and negative opinion words in one question, respectively. The feature totalow (TOW) is the total number of opinion operators, positive opinion keywords, and negative opinion keywords in a question. The feature totalscore (TSR) is the overall opinion score of the whole question, while the feature maxscore (MSR) is the absolute maximum opinion score of opinion keywords in a question.", "cite_spans": [ { "start": 168, "end": 184, "text": "[Ku et al. 2006]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Q-Classifier", "sec_num": "4.2" }, { "text": "Section 3 mentions 2,443 factual questions and 1,289 opinion questions are collected from four different sources. To keep the quantities of factual and opinion questions balanced, 1,289 factual questions are randomly selected from the 2,443 questions. Together with 1,289 opinion questions, a total of 2,578 questions are employed in the experiments of question classification. We adopt See5 to generate the decision tree based on different combinations of features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Q-Classifier", "sec_num": "4.2" }, { "text": "With a 10-fold cross-validation, See5 outputs the resulting decision trees for each fold, and a summary with the mean of error rates produced by these 10 folds. Table 3 shows experimental results. The symbol \"only with feature x\" shows the error rate of using one single feature, while \"with all but feature x\" shows the error rate of using all features except the specified feature. The features pretype (PTY) and totalow (TOW) perform best in reducing errors when used alone. Moreover, they cannot be ignored since the error rate increases when they are excluded. The feature totalow shows that if a question contains more opinion keywords, it is more possible that it is an opinion question. After all the features are considered together, the best performance is 87.8%.", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 168, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Q-Classifier", "sec_num": "4.2" }, { "text": "OPQ-Classifier categorizes opinion questions into the corresponding opinion question types. 
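As a concrete illustration of the six lexicon-based Q-Classifier features described above (OPR, POS, NEG, TOW, TSR, MSR), the sketch below computes them for a segmented question. The sentiment dictionary and operator list are assumed inputs (with negative keywords carrying negative scores, as in NTUSD); the PTY feature, which comes from a conventional factual QA system, and the See5 training step are not shown.

```python
# Sketch of the six lexicon-based Q-Classifier features (assumed lexicon format).
def lexicon_features(words, pos_scores, neg_scores, operators):
    pos_hits = [pos_scores[w] for w in words if w in pos_scores]   # positive keyword scores (> 0)
    neg_hits = [neg_scores[w] for w in words if w in neg_scores]   # negative keyword scores (< 0)
    opr = sum(1 for w in words if w in operators)                  # opinion operators in the question
    return {
        "OPR": opr,
        "POS": len(pos_hits),
        "NEG": len(neg_hits),
        "TOW": opr + len(pos_hits) + len(neg_hits),                # total opinion words
        "TSR": sum(pos_hits) + sum(neg_hits),                      # overall opinion score
        "MSR": max((abs(s) for s in pos_hits + neg_hits), default=0.0),  # absolute max score
    }
```

These feature values, together with PTY, would then be fed to the See5 decision-tree learner.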
We first examine if there is any specific pattern in the question. If yes, the rule for the pattern is applied. Otherwise, a scoring function is applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OPQ-Classifier", "sec_num": "4.3" }, { "text": "The heuristic rules are listed as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "The pattern \"A-not-A\": Yes/No", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "(2) End with question words: Yes/No", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "(3) \"Who\" + opinion operator: Holder (4) \"Who\" + passive tense: Target A scoring function deals with those questions which cannot be classified by the above patterns. Unigrams, bigrams, and trigrams in training questions are selected as feature candidates. These feature candidates are separated into the topic dependent type and the general type. A topic dependent feature is only meaningful in questions of some topics, while general features may appear in questions of all kinds of topics. If a feature is topic dependent (e.g., human cloning and Clinton), it is dropped from the feature set. Only general features (e.g., is or is not, whether, and reason) are kept. Finally, a set of features is obtained from the training questions. Then the discriminate power of these features is calculated as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "First, the observation probability of a feature i in the question type j is defined in Formula (1):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "0 ( , ) ( , ) ( ) NumQ i j P i j NumQ j =", "eq_num": "(1)" } ], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "where i is the index of the feature, j is the index of the question type, and NumQ represents the number of questions. The observation probability shows how often a feature is observed in each type. 
It is then normalized by Formula (2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "6 1 ( , ) ( , ) ( , ) o o o j P i j NP i j P i j = = \u2211", "eq_num": "(2)" } ], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "Every feature has six normalized observation probabilities corresponding to the six types. With these probabilities, the score ScoreQ of a question can be calculated by Formula (3):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1 ( ) ( , ) n o i ScoreQ j NP i j = = \u2211", "eq_num": "(3)" } ], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "where n is the total number of features in question Q, and ScoreQ(j) represents the score of question Q as type j. Since there are six possible opinion question types, the six ScoreQ represent how possible the question Q belongs to each type. These six scores form the feature vector of the question Q for classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "Training instances are used to find the centroid of each type. The Pearson correlation is adopted as the distance measure. The distances between the testing opinion questions and the six centroids are calculated to assign the opinion questions to the closest type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "We use the OPQ corpus in Section 3 for the evaluation of the OPQ-Classifier. The opinion types of these opinion questions are given manually. Among the 707 opinion questions, answers of 160 opinion questions are found in the NTCIR corpus. They are used as the training data for an intensive analysis of both questions and answers. The remaining 547 opinion questions are used as the testing data. The confusion matrix of the OPQ-Classifier is shown in Tables 4 (in numbers) and 5 (in percentages). Each element (i,j) in these matrices shows the number or percentage for questions of type i classified as type j. The accuracy, defined as the number of correctly classified questions over the total number of questions, is 92.5%. There are fewer questions of target (TG) and majority (MJ) types, i.e., 8 and 13 questions in the testing collection, respectively. The unsatisfactory results of these two types, 62.5% in TG and 61.5% in MJ, may be due to the lack of training questions. From the questions collected, we also find that the proportion of YN (binary) questions is significant in opinionated questions. Figure 2 shows the framework of answer passage retrieval in an opinion QA system. 
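The scoring and classification step of the OPQ-Classifier can be sketched as follows, directly following Formulas (1)-(3) and the nearest-centroid assignment with Pearson correlation. The data layout (a per-type count table and per-type question totals) is an assumption made for the sketch; feature selection into general n-grams and the heuristic rules are taken as already applied.

```python
# Sketch of the OPQ-Classifier scoring function (Formulas (1)-(3)) and
# nearest-centroid classification with Pearson correlation (assumed data layout).
from statistics import mean, pstdev

TYPES = ["HD", "TG", "AT", "RS", "MJ", "YN"]

def normalized_probs(feature, type_counts, num_questions):
    # Formula (1): observation probability of the feature in each type.
    p = {j: type_counts[j].get(feature, 0) / num_questions[j] for j in TYPES}
    total = sum(p.values())
    # Formula (2): normalize the six probabilities so they sum to one.
    return {j: (p[j] / total if total else 0.0) for j in TYPES}

def score_vector(features, type_counts, num_questions):
    # Formula (3): sum normalized probabilities over the question's features.
    scores = dict.fromkeys(TYPES, 0.0)
    for f in features:
        np_f = normalized_probs(f, type_counts, num_questions)
        for j in TYPES:
            scores[j] += np_f[j]
    return [scores[j] for j in TYPES]

def pearson(u, v):
    mu, mv = mean(u), mean(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    denom = pstdev(u) * pstdev(v) * len(u)
    return cov / denom if denom else 0.0

def classify(features, centroids, type_counts, num_questions):
    vec = score_vector(features, type_counts, num_questions)
    # Assign the question to the type whose centroid correlates most with its vector.
    return max(centroids, key=lambda j: pearson(vec, centroids[j]))
```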
The question focus (Q Focus) supplied by the question analysis serves as the input to an Okapi IR system [Reberson et al. 1998 ] to retrieve relevant sentences from the knowledge base. From the relevant sentences, we can tell whether the focus (Focus Detection) is in a scope of opinion text spans or not (Opinion Scope Identification), and, if yes, (Opinion Toward Focus), whether the polarity (Detecting Polarity) of the scope matches the polarity of the question (Same Polarity). The details are discussed in the following sections. ", "cite_spans": [ { "start": 1298, "end": 1319, "text": "[Reberson et al. 1998", "ref_id": null } ], "ref_spans": [ { "start": 1111, "end": 1119, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 315 Opinion Question Answering Systems", "sec_num": null }, { "text": "The first stage of answer passage retrieval is to input the question focus as a query into an IR system to retrieve relevant sentences from the knowledge base. These retrieved sentences may contain answers for a question. A set of content words in one question is used to represent its focus. The following steps extract a set of content words as the question focus and formulate a query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "5.1" }, { "text": "(1) Remove question marks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "5.1" }, { "text": "(2) Remove question words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "5.1" }, { "text": "(3) Remove opinion operators.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "5.1" }, { "text": "(4) Remove negation words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "5.1" }, { "text": "(5) Name the remaining terms as focus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "5.1" }, { "text": "(6) Use the Boolean OR operator to form a query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "5.1" }, { "text": "Since question marks and question words are common in every question, they do not contribute to the retrieval of relevant sentences; therefore, they are removed. Opinion operators and negation words are removed as well since they represent the question polarity instead of the question focus. Once we have the question focus, we use the Boolean OR operator rather than the AND operator to form a query. This is because we prefer the IR system to return sentences that have any relevancy to the question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Focus Extraction", "sec_num": "5.1" }, { "text": "The polarity of the question is useful in opinion QA systems to filter out query-relevant sentences which have different polarities from the question. If the question polarity is positive, the sentences containing answers ought to be positive, and vice versa. 
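The six focus-extraction steps of Section 5.1 amount to stripping non-content tokens and joining what remains with Boolean OR. A minimal sketch is given below; the stop lists (question words, opinion operators, negation words) are assumed to be available, and Chinese word segmentation is taken for granted.

```python
# Sketch of question-focus extraction and query formulation (Section 5.1).
def build_focus_query(words, question_words, operators, negations):
    focus = [w for w in words
             if w not in {"?", "？"}        # (1) remove question marks
             and w not in question_words    # (2) remove question words
             and w not in operators         # (3) remove opinion operators
             and w not in negations]        # (4) remove negation words
    return " OR ".join(focus)               # (5)-(6) remaining terms, joined with Boolean OR
```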
The polarity detection algorithm is shown as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Polarity Detection", "sec_num": "5.2" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Polarity Detection", "sec_num": "5.2" }, { "text": "Determine the polarity of the opinion operator. 1 is for positive, 0 is for neutral, and -1 is for negative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Polarity Detection", "sec_num": "5.2" }, { "text": "(2) Negate the polarity of operator if there is any negation word anterior to the operator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Polarity Detection", "sec_num": "5.2" }, { "text": "(3) Determine the polarity of the question focus. 1 is for positive, 0 is for neutral, and -1 is for negative.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Polarity Detection", "sec_num": "5.2" }, { "text": "(4) If one of the operator polarity and question focus is 0 (neutral), output the sign of the other; else output the sign of the product of the polarities of the opinion operator and the question focus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Polarity Detection", "sec_num": "5.2" }, { "text": "We consider the polarity of the question focus together with the polarity of the opinion operator, because the opinion operator primarily shows the opinion tendency of the question and different polarities of the question focus can affect the polarity of the entire question. A", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Polarity Detection", "sec_num": "5.2" }, { "text": "Opinion Question Answering Systems positive opinion operator stands for a supportive attitude such as \"agree\", \"approve\", and \"support\". A neutral opinion operator stands for a neutral attitude such as \"state\", \"mention\", and \"indicate\". A negative opinion operator stands for a not-supportive attitude such as \"doubt\", \"disapprove\", and \"protest\". In the question \"Who approves of the Joint College Entrance Examination?\", \"approve\" is a positive operator, and \"the Joint College Entrance Examination\" is a neutral question focus. The overall polarity of this question is positive, so the opinion QA system needs to retrieve sentences that express a positive attitude to \"the Joint College Entrance Examination.\" In contrast, in the question \"Who agrees with the abolishment of the Joint College Entrance Examination?\", the question focus \"the abolishment of the Joint College Entrance Examination\" becomes negative because of \"the abolishment\". Even though the operator is positive, opinion QA systems still have to look for sentences that contain negative opinions toward \"the Joint College Entrance Examination.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Analysis and Answer Passage Retrieval for 319", "sec_num": null }, { "text": "In Chinese, a sentence ending with a full stop may be composed of several sentence fragments sf separated by commas or semicolons as follows: \"sf 1 \uff0csf 2 \uff0csf 3 \uff0c\u2026\uff0csf n \u3002\". 
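The four polarity-detection steps above can be summarized in a few lines of code. The operator and focus polarities are assumed to be looked up from the sentiment dictionary as +1, 0, or -1, and the negation flag indicates a negation word preceding the operator; this is a sketch of the stated rules, not released code.

```python
# Sketch of question-polarity detection (Section 5.2, steps (1)-(4)).
def question_polarity(operator_polarity: int, focus_polarity: int, negated: bool) -> int:
    if negated:                        # step (2): negate the operator polarity
        operator_polarity = -operator_polarity
    if operator_polarity == 0:         # step (4): if one side is neutral,
        return focus_polarity          #   output the sign of the other side
    if focus_polarity == 0:
        return operator_polarity
    # otherwise output the sign of the product of the two polarities
    return 1 if operator_polarity * focus_polarity > 0 else -1

# e.g. "Who agrees with the abolishment of the Joint College Entrance Examination?"
# -> positive operator (+1) x negative focus (-1) = negative question polarity (-1)
```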
Chen and Yan [1995] show that about 75% of Chinese sentences contain more than two sentence fragments.", "cite_spans": [ { "start": 172, "end": 191, "text": "Chen and Yan [1995]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Opinion Scope Identification", "sec_num": "5.3" }, { "text": "An opinion scope denotes a range expressing attitudes in a sentence. It may be a complete sentence, a sentence fragment, or a meaningful unit (MU) based on different criteria. A meaningful unit denotes a complete concept in one sentence. It is very common that many concepts are expressed within one sentence in Chinese documents. Therefore, identifying the complete concept denoted as MU in sentences is necessary for the processing of relevant opinions. As mentioned, a Chinese sentence is composed of several sentence fragments of which one or many can form a meaningful unit, which expresses a complete concept. This paper employs linking elements [Li and Thompson 1981] such as \"because\", \"when\", etc. to compose MUs from a sentence. For example, in S (in Chinese), \"\u56e0\u6b64\" (thus) is a linking element which links sf 2 , sf 3 , and sf 4 together, and sf 2 is a subordinate clause of the operator \"\u8868 \u793a\" (indicate) in sf 1 . Therefore, sf 1 , sf 2 , sf 3 , and sf 4 form a MU in this case. ", "cite_spans": [ { "start": 652, "end": 674, "text": "[Li and Thompson 1981]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Opinion Scope Identification", "sec_num": "5.3" }, { "text": "The IR system takes a sentence as a retrieval unit and reports those sentences that are probably relevant to a given query. The focus detection aims to know which sentence fragments are useful to extract answer passages. Three criteria of focus detection, namely exact match, partial match, and lenient, are considered. In an extreme case (i.e., lenient), all the fragments in a retrieved sentence are regarded as relevant to the question focus. In another extreme case (i.e., exact match), only the fragment containing the complete question focus is regarded as relevant. In other words, exact match filters the fragments without the sentence focus out from the retrieved sentences. Partial match is weaker than exact match and is stronger than the lenient criterion. Those fragments which contain a part of the question focus are regarded as relevant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Focus Detection", "sec_num": "5.4" }, { "text": "There are three criteria for focus detection and opinion scope identification, respectively; thus, a total of 9 combinations are considered. For example, a combination of exact match and meaningful units means that meaningful units containing at least one focus are extracted for further processing. Similarly, a combination of partial match and sentence fragments indicates that sentence fragments containing at least one partial focus are extracted for further processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Focus Detection", "sec_num": "5.4" }, { "text": "Given a combination of the above strategies, we have a set of opinion scopes relevant to the specific focus. Polarity detection tries to identify those scopes bearing the same polarity as the question. How to determine the opinion polarity is an important issue. Two approaches are adopted. 
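The three focus-detection criteria (exact match, partial match, lenient) can be read as a filter over the fragments of a retrieved sentence before opinion scopes are assembled. The sketch below is one plausible reading under that assumption; the paper itself gives no pseudo-code, and segmentation into fragments or meaningful units is assumed to be done beforehand.

```python
# Sketch of the three focus-detection criteria (Section 5.4, assumed reading).
def relevant_fragments(fragments, focus_terms, criterion="partial"):
    if criterion == "lenient":    # keep every fragment of a retrieved sentence
        return list(fragments)
    if criterion == "exact":      # keep fragments containing the complete focus
        return [f for f in fragments if all(t in f for t in focus_terms)]
    if criterion == "partial":    # keep fragments containing any part of the focus
        return [f for f in fragments if any(t in f for t in focus_terms)]
    raise ValueError(f"unknown criterion: {criterion}")
```

Combined with the three opinion-scope units (sentence, sentence fragment, meaningful unit), this yields the nine configurations evaluated above.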
The opinion word approach employs a sentiment dictionary NTUSD 3 , which contains 2,812 positive words and 8,276 negative words, to detect whether words in this dictionary appear in a certain scope. The score of an opinion scope is the sum of the scores of these words [Ku and Chen 2007] .", "cite_spans": [ { "start": 560, "end": 578, "text": "[Ku and Chen 2007]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Polarity Detection", "sec_num": "5.5" }, { "text": "People sometimes imply their feelings or beliefs toward a particular target or event by actions. For example, people may not say \"Objection!\" to disagree an event, but they may try to abolish or terminate it as possible as they could. On the contrary, people may not say \"I love it!\" to show their delight with an event, but they may try to fight for it or legalize it. In both circumstances, what people take in action expresses their opinions. Action words are those which indicate a person's willing of doing or not doing some behaviors. For example, carry out, seek, and follow are words showing willingness to do something, and we name these words as do's; substitute, stop, and boycott are words showing unwillingness to do something, and we name these words as don'ts. We manually collect action words from materials other Opinion Question Answering Systems than those used in this paper. A total of 69 action words are collected, including 54 do's and 15 don'ts. In the action word approach, we detect opinions in scopes with the help of do's and don'ts together with a sentiment dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polarity Detection", "sec_num": "5.5" }, { "text": "The F-measure metric is used for evaluation for the answer passage retrieval. With recall (R) and precision (P), F-measure is defined as 2RP/(R+P). To answer an opinion question, all answer passages have to be retrieved for opinion polarity judgment. Therefore, the conventional evaluation metric that uses the precision and recall at a certain rank, e.g., top 10, may not be suitable for this task. Since all answer passages, sentence fragments and meaningful units which provide correct answers are already annotated in the testing bed, the F-measure metric can be applied without questions. Tables 6 and 7 show the F-measures of answer passage retrieval using the opinion word approach and the action word approach, respectively. In these two approaches, adopting meaningful units as opinion scopes is better than adopting sentences and sentence fragments. Considering both opinion and action words is better than opinion words only. The best F-measure 40.59% is achieved when meaningful units and partial match are used. Although meaningful units are the most reasonable units for opinionated question answering, exact match is better than partial match when using opinion word approach, while it is the opposite when adopting action word approach. This is because the number of opinion words is much greater than the number of action words. Although opinion words are useful in extracting opinion evidence as well as action words, they may bring in noise. 
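A minimal sketch of scope-level polarity scoring and of the F-measure used for evaluation is given below. Word scores are assumed to follow the NTUSD convention (positive words scored above zero, negative words below zero); the equal weight given to do/don't action words is an illustrative assumption, not a parameter reported in the paper.

```python
# Sketch of opinion-word (optionally action-word) polarity scoring and F-measure.
def scope_polarity(words, sentiment_scores, dos=(), donts=(), action_weight=1.0):
    score = sum(sentiment_scores.get(w, 0.0) for w in words)       # opinion word approach
    score += action_weight * sum(1.0 for w in words if w in dos)    # do's count as support
    score -= action_weight * sum(1.0 for w in words if w in donts)  # don'ts count as opposition
    return (score > 0) - (score < 0)                                # +1, 0, or -1

def f_measure(retrieved, relevant):
    tp = len(set(retrieved) & set(relevant))
    if not retrieved or not relevant or not tp:
        return 0.0
    p, r = tp / len(retrieved), tp / len(relevant)
    return 2 * r * p / (r + p)                                      # F = 2RP / (R + P)
```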
Applying exact match is more helpful than applying partial match in the aspect of expelling noise.", "cite_spans": [], "ref_spans": [ { "start": 594, "end": 608, "text": "Tables 6 and 7", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experiments on Answer Passage Retrieval", "sec_num": "5.6" }, { "text": "The previous experiments were done on sentences reported by the Okapi IR system. These retrieved sentences are not all relevant to the questions. This section will discuss how the relevance affects answer passage retrieval. Recall that the experimental corpus is annotated with Rel2T (relevant or irrelevant to the topic), Rel2Q (relevant or irrelevant to the question), CorrectMU (text spans containing answers to the question).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on Relevance Effects", "sec_num": "5.7" }, { "text": "Assume meaningful units are taken as the opinion scope. Tables 8 and 9 show how relevance influences the performance of answer passage retrieval using the opinion word and action word approaches, respectively. Rel2T shows the performance of using answer passages relevant to the six topics, that is, the original relevant documents from NTCIR CLIR task. Rel2Q shows the performance of using answer passages relevant to the questions, while CorrectMU shows the performance of using correct opinion fragments, which are relevant to the question focus, to decide opinion polarities. Rel2T is similar to the relevant sentence retrieval, which was shown to be tough in the TREC novelty track (Soboroff and Harman, 2003) . From Rel2T to Rel2Q and CorrectMU, the best strategy for matching the question focus switches from partial match to lenient. This is reasonable, since the contents of Rel2Q and CorrectMU are already relevant to the question focus. In Rel2Q, doing focus detection doesn't benefit or harm much (50.37% vs. 53.06%). It shows that the question focus will appear exactly or partially in the relevant sentences. However, focus detection lowers the performance in CorrectMU (72.84% vs. 84.96%). It tells that the question focus and the correct meaningful units may appear in different positions within the sentence. For example, the first meaningful unit talks about the question focus, Question Analysis and Answer Passage Retrieval for 323 Opinion Question Answering Systems while the third meaningful unit really answers the question but omits the question focus since it is mentioned earlier. From Rel2T to Rel2Q, the F-measure does not increase as much as that from Rel2Q to CorrectMU. This result shows that finding the correct fragments of passages to judge the opinion polarity is very crucial to answer passage retrieval. The F-measure of CorrectMU shows the performance of judging opinion polarities without the relevant issue.", "cite_spans": [ { "start": 687, "end": 714, "text": "(Soboroff and Harman, 2003)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 56, "end": 70, "text": "Tables 8 and 9", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiments on Relevance Effects", "sec_num": "5.7" }, { "text": "Using either the opinion word approach or the action word approach achieves an F-measure greater than 80%. As a whole, including action words is better than using opinion words only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on Relevance Effects", "sec_num": "5.7" }, { "text": "This paper proposes some important techniques for opinion question answering. 
For question classification, a two-layered framework including two classifiers is proposed. General questions are divided into factual and opinion questions, and then opinion questions themselves are classified into one of the six opinion question types defined in this paper. With both factual and opinion features for a decision tree model, the classifier achieves a precision rate of 87.8% for general question classification. With heuristic rules and the Pearson correlation coefficient as the distance measurement, the classifier achieves a precision rate of 92.5% for opinion question classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "For opinion answer passage retrieval, we are concerned not only with the relevance but also with the sentiment. Considering both opinion words and action words is better than considering opinion words only. Taking meaningful units as the opinion scope is better than taking sentences. Under the action word approach, the best model achieves an F-measure of 40.59% using partial match at the level of meaningful unit. With relevance issues removed, the F-measure of the best model boosts up to 84.96%. Although understanding the meaning of the question focus is important for the relevance detection, some foci are quite challenging in the experiments. Query expansion and concept ontology will be explored in the future.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "http://trec.nist.gov/ 2 http://research.nii.ac.jp/ntcir/index-en.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://nlg18.csie.ntu.edu.tw:8080/opinion/index.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Research of this paper was partially supported by Google Research Award, and National Science Council, Taiwan, under the contract NSC95-2221-E-002-265-MY3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Combining Low-Level and Summary Representations of Opinions for Multi-Perspective Question Answering", "authors": [ { "first": "C", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "T", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "D", "middle": [], "last": "Litman", "suffix": "" } ], "year": 2003, "venue": "Proceedings of AAAI Spring Symposium Workshop", "volume": "", "issue": "", "pages": "20--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cardie, C., J. Wiebe, T. Wilson, and D. Litman, \"Combining Low-Level and Summary Representations of Opinions for Multi-Perspective Question Answering\", In Proceedings of AAAI Spring Symposium Workshop, 2003, pp. 20-27.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Dealing with Very Long Chinese Sentences in a Robust Parsing System", "authors": [ { "first": "H.-H", "middle": [], "last": "Chen", "suffix": "" }, { "first": "S.-J", "middle": [], "last": "Yan", "suffix": "" } ], "year": 1995, "venue": "Proceedings of National Science Council, Part A: Physical Science and Engineering", "volume": "19", "issue": "", "pages": "398--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen, H.-H., and S.-J. 
Yan, \"Dealing with Very Long Chinese Sentences in a Robust Parsing System\", In Proceedings of National Science Council, Part A: Physical Science and Engineering, 19(5), 1995, pp. 398-407.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Determining the Sentiment of Opinions", "authors": [ { "first": "S.-M", "middle": [], "last": "Kim", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20 th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1367--1373", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim, S.-M., and E. Hovy, \"Determining the Sentiment of Opinions\", In Proceedings of the 20 th International Conference on Computational Linguistics, 2004, pp. 1367-1373.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Identifying Opinion Holders for Question Answering in Opinion Texts", "authors": [ { "first": "S-M", "middle": [], "last": "Kim", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2005, "venue": "Proceedings of AAAI-05 Workshop on Question Answering in Restricted Domains", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim, S-M., and E. Hovy, \"Identifying Opinion Holders for Question Answering in Opinion Texts\", In Proceedings of AAAI-05 Workshop on Question Answering in Restricted Domains, 2005.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Opinion Extraction, Summarization and Tracking in News and Blog Corpora", "authors": [ { "first": "L.-W", "middle": [], "last": "Ku", "suffix": "" }, { "first": "Y.-T", "middle": [], "last": "Liang", "suffix": "" }, { "first": "H.-H", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2006, "venue": "Proceedings of AAAI-2006 Spring Symposium on Computational Approaches to Analyzing Weblogs", "volume": "", "issue": "", "pages": "100--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ku, L.-W., Y.-T. Liang, and H.-H. Chen, \"Opinion Extraction, Summarization and Tracking in News and Blog Corpora\", In Proceedings of AAAI-2006 Spring Symposium on Computational Approaches to Analyzing Weblogs, AAAI Technical Report, 2006, pp. 100-107.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Mining Opinions from the Web: Beyond Relevance Retrieval", "authors": [ { "first": "L.-W", "middle": [], "last": "Ku", "suffix": "" }, { "first": "H.-H", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2007, "venue": "Journal of American Society for Information Science and Technology, Special Issue on Mining Web Resources for Enhancing Information Retrieval", "volume": "58", "issue": "12", "pages": "1838--1850", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ku, L.-W., and H.-H. Chen, \"Mining Opinions from the Web: Beyond Relevance Retrieval\", Journal of American Society for Information Science and Technology, Special Issue on Mining Web Resources for Enhancing Information Retrieval, 58(12), 2007, pp. 1838-1850.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Mandarin Chinese: A Functional Reference Grammar", "authors": [ { "first": "C", "middle": [ "N" ], "last": "Li", "suffix": "" }, { "first": "S", "middle": [ "A" ], "last": "Thompson", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, C. N., and S. A. 
Thompson, Mandarin Chinese: A Functional Reference Grammar, University of California Press, 1981.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A Study on Chinese Open-Domain Question Answering Systems", "authors": [ { "first": "C.-J", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, C.-J., A Study on Chinese Open-Domain Question Answering Systems, Ph. D. Thesis, National Taiwan University, 2004.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Thumbs up? Sentiment Classification Using Machine Learning Techniques", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" }, { "first": "S", "middle": [], "last": "Vaithyanathan", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 2002 Conference on EMNLP", "volume": "", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pang, B., L. Lee, and S. Vaithyanathan, \"Thumbs up? Sentiment Classification Using Machine Learning Techniques\", In Proceedings of the 2002 Conference on EMNLP, 2002, pp. 79-86.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Data Mining Tools See5 and C5.0", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Quinlan", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quinlan, J. R., Data Mining Tools See5 and C5.0. http://www.rulequest.com/see5-info.html, 2000.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning Extraction Patterns for Subjective Expressions", "authors": [ { "first": "E", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 Conference on EMNLP", "volume": "", "issue": "", "pages": "105--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riloff, E., and J. Wiebe, \"Learning Extraction Patterns for Subjective Expressions\", In Proceedings of the 2003 Conference on EMNLP, 2003, pp. 105-112.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning Subjective Nouns Using Extraction Pattern Bootstrapping", "authors": [ { "first": "E", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "T", "middle": [], "last": "Wilson", "suffix": "" } ], "year": 2003, "venue": "Proceedings of Seventh Conference on Natural Language Learning", "volume": "", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riloff, E., J. Wiebe, and T. Wilson, \"Learning Subjective Nouns Using Extraction Pattern Bootstrapping\", In Proceedings of Seventh Conference on Natural Language Learning, 2003, pp. 25-32.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Okapi at TREC-7: Automatic Ad Hoc, Filtering, VLC and Interactive", "authors": [ { "first": "S", "middle": [ "E" ], "last": "Robertson", "suffix": "" }, { "first": "S", "middle": [], "last": "Walker", "suffix": "" }, { "first": "M", "middle": [], "last": "Beaulieu", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 7th Text Retrieval Conference", "volume": "", "issue": "", "pages": "253--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robertson, S. E., S. Walker, and M. 
Beaulieu, \"Okapi at TREC-7: Automatic Ad Hoc, Filtering, VLC and Interactive\", In Proceedings of the 7th Text Retrieval Conference, 1998, pp. 253-264.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Overview of the TREC 2003 novelty track", "authors": [ { "first": "I", "middle": [], "last": "Soboroff", "suffix": "" }, { "first": "D", "middle": [], "last": "Harman", "suffix": "" } ], "year": 2003, "venue": "Proceedings of Twelfth Text REtrieval Conference", "volume": "", "issue": "", "pages": "38--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Soboroff, I., and D. Harman, \"Overview of the TREC 2003 novelty track\", In Proceedings of Twelfth Text REtrieval Conference, National Institute of Standards and Technology, 2003, pp. 38-53.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Multi-Perspective Question Answering Using the OpQA Corpus", "authors": [ { "first": "V", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2005, "venue": "Proceedings of HLT/EMNLP 2005", "volume": "", "issue": "", "pages": "923--930", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stoyanov, V., C. Cardie, and J. Wiebe, \"Multi-Perspective Question Answering Using the OpQA Corpus\", In Proceedings of HLT/EMNLP 2005, 2005, pp. 923-930.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning Subjective Adjectives from Corpora", "authors": [ { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2000, "venue": "Proceeding of 17th National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "735--740", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wiebe, J., \"Learning Subjective Adjectives from Corpora\", In Proceeding of 17th National Conference on Artificial Intelligence, 2000, pp. 735-740.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "2002 NRRC Summer Workshop on Multi-Perspective Question Answering", "authors": [ { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "E", "middle": [], "last": "Breck", "suffix": "" }, { "first": "C", "middle": [], "last": "Buckly", "suffix": "" }, { "first": "C", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "P", "middle": [], "last": "Davis", "suffix": "" }, { "first": "B", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "D", "middle": [], "last": "Litman", "suffix": "" }, { "first": "D", "middle": [], "last": "Pierce", "suffix": "" }, { "first": "E", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "T", "middle": [], "last": "Wilson", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wiebe, J., E. Breck, C. Buckly, C. Cardie, P. Davis, B. Fraser, D. Litman, D. Pierce, E. Riloff, and T. 
Wilson, \"2002 NRRC Summer Workshop on Multi-Perspective Question Answering\", ARDA NRRC Summer 2002 Workshop.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Question Analysis and Answer Passage Retrieval for 325", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Question Analysis and Answer Passage Retrieval for 325", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Opinion Question Answering Systems", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Opinion Question Answering Systems", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis", "authors": [ { "first": "T", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "P", "middle": [], "last": "Hoffmann", "suffix": "" } ], "year": 2005, "venue": "Proceedings of HLT/EMNLP 2005", "volume": "", "issue": "", "pages": "347--354", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilson, T., J. Wiebe, and P. Hoffmann, \"Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis\", In Proceedings of HLT/EMNLP 2005, 2005, pp. 347-354.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Towards Answering Opinion Questions: Separating Facts from Opinions and Identifying the Polarity of Opinion Sentences", "authors": [ { "first": "H", "middle": [], "last": "Yu", "suffix": "" }, { "first": "V", "middle": [], "last": "Hatzivassiloglou", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT/EMNLP 2003", "volume": "", "issue": "", "pages": "129--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu, H., and V. Hatzivassiloglou, \"Towards Answering Opinion Questions: Separating Facts from Opinions and Identifying the Polarity of Opinion Sentences\", In Proceedings of HLT/EMNLP 2003, 2003, pp. 129-136.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "An Opinion QA System Framework." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Target (TG) Definition: Asking whom the holder's attitude is toward. Example: Who does the public think should be responsible for the airplane crash? Answer: Entities and the corresponding evidence. Question Analysis and Answer Passage Retrieval for 313 Opinion Question Answering Systems (3) Attitude (AT) Definition: Asking what the attitude of a holder to a specific target is. Example: How do people feel about the affair of U.S. President Clinton?" }, "FIGREF4": { "uris": null, "type_str": "figure", "num": null, "text": "Answer Passage Retrieval." }, "FIGREF5": { "uris": null, "type_str": "figure", "num": null, "text": ": sf 1 : \u9ec3\u5b97\uf914\u8868\u793a(indicate:operator)\uff0c sf 2 : \u767c\ufa08\u570b\u6c11 IC \u5361\u727d\u6d89\u5230\u57fa\u672c\u4eba\u6b0a\uff0c sf 3 : \u56e0\u6b64(thus:linking element)\uff0c sf 4 : \u5728\u6c7a\u7b56\u904e\u7a0b\u4e0a\u5fc5\u9808\u76f8\u7576\u56b4\u5bc6\uff0c sf 5 : \uf9b5\u5982\u65e5\u672c\u5c31\u672a\u767c\ufa08\u570b\u6c11\u8eab\u4efd\u8b49\u3002 320 Lun-WeiKu et al." }, "TABREF0": { "html": null, "text": "", "num": null, "content": "
Corpus \ Q type | Factual | Opinion | Total
TREC            | 500     | 0       | 500
NTCIR           | 1,577   | 0       | 1,577
Polls           | 62      | 582     | 644
OPQ             | 304     | 707     | 1,011
Total           | 2,443   | 1,289   | 3,732
", "type_str": "table" }, "TABREF1": { "html": null, "text": "", "num": null, "content": "
#C | 1  | 2  | 3  | 4  | 5  | 6  | 7 | Total
#Q | 19 | 47 | 39 | 30 | 13 | 12 | 0 | 160
", "type_str": "table" }, "TABREF2": { "html": null, "text": "", "num": null, "content": "
feature x              | PTY  | OPR  | POS  | NEG  | TOW  | TSR  | MSR  | ALL
only with feature x    | 19.6 | 38.5 | 34.9 | 35.3 | 21.9 | 26.6 | 29.6 | 12.2
with all but feature x | 16.3 | 12.7 | 13.7 | 12.2 | 14.8 | 12.4 | 12.8 |
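The ablation table above contrasts a classifier trained with a single feature against one trained with every feature except it. A minimal sketch of how such an ablation grid could be produced is given below; it assumes a NumPy feature matrix X (one column per feature, in the order PTY, OPR, POS, NEG, TOW, TSR, MSR) and gold labels y, treats the reported percentages as cross-validated error rates, and uses scikit-learn's DecisionTreeClassifier as a stand-in for the See5/C5.0 decision-tree tool cited in the references.

```python
# Sketch of the one-feature / all-but-one ablation, under the assumptions
# stated above (error-rate metric, scikit-learn decision tree as a stand-in).
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

FEATURES = ["PTY", "OPR", "POS", "NEG", "TOW", "TSR", "MSR"]  # names from the table

def error_rate(X, y, cols):
    """10-fold cross-validated error rate (%) using only the given feature columns."""
    clf = DecisionTreeClassifier(random_state=0)
    accuracy = cross_val_score(clf, X[:, cols], y, cv=10).mean()
    return 100.0 * (1.0 - accuracy)

def ablation_grid(X, y):
    all_cols = list(range(len(FEATURES)))
    only, all_but = {}, {}
    for i, name in enumerate(FEATURES):
        only[name] = error_rate(X, y, [i])                              # feature x alone
        all_but[name] = error_rate(X, y, [c for c in all_cols if c != i])  # everything but x
    only["ALL"] = error_rate(X, y, all_cols)                            # full feature set
    return {"only with feature x": only, "with all but feature x": all_but}
```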
", "type_str": "table" }, "TABREF3": { "html": null, "text": "", "num": null, "content": "
Confusion Matrix (Number). Columns: opinion question type i (HD, TG, AT, RS, MJ, YN); rows: classified as type j (HD, TG, AT, RS, MJ, YN).
Cell counts: 27 0 0 1 0 30 5 0 0 0 30 0 68 4 0 151 0 0 17 0 50 0 0 0 8 50 0 0 0 0 385
Total: 318872313385
Table 5. Confusion Matrix (Percentage). Columns: opinion question type i (HD, TG, AT, RS, MJ, YN); rows: classified as type j.
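The two confusion matrices record how often questions of each opinion question type are classified as each type. As a worked illustration of how overall accuracy and per-type precision and recall follow from such a matrix, the sketch below uses small hypothetical counts rather than the table's values; only the layout convention, with rows for the type a question is classified as and columns for its annotated type, is taken from the table.

```python
# Worked example: accuracy and per-type precision/recall from a confusion
# matrix. CONFUSION[j][i] = number of questions of true type i classified as
# type j, following the table's layout. All counts below are hypothetical.
TYPES = ["HD", "TG", "AT", "RS", "MJ", "YN"]
CONFUSION = [
    [27,  0,  0,  0,  0,  0],
    [ 1, 30,  2,  0,  0,  0],
    [ 0,  5, 68,  0,  1,  0],
    [ 0,  0,  0,  2,  0,  0],
    [ 0,  0,  2,  1, 12,  0],
    [ 3,  0,  0,  0,  0, 40],
]

def report(confusion, types):
    n = len(types)
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[k][k] for k in range(n))
    print(f"accuracy = {correct / total:.3f}")
    for k, t in enumerate(types):
        classified_as_t = sum(confusion[k])                     # row sum
        truly_t = sum(confusion[j][k] for j in range(n))        # column sum
        precision = confusion[k][k] / classified_as_t if classified_as_t else 0.0
        recall = confusion[k][k] / truly_t if truly_t else 0.0
        print(f"{t}: precision = {precision:.3f}, recall = {recall:.3f}")

report(CONFUSION, TYPES)
```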
", "type_str": "table" }, "TABREF4": { "html": null, "text": "", "num": null, "content": "
Opinion Scope → / Focus Detection ↓ | sentence | sentence fragment | meaningful unit
Exact Match   | 32.09% | 36.06% | 36.25%
Partial Match | 27.32% | 27.46% | 33.09%
Lenient       | 19.91% | 19.95% |

Opinion Scope → / Focus Detection ↓ | sentence | sentence fragment | meaningful unit
Exact Match   | 28.75% | 30.20% | 36.36%
Partial Match | 32.83% | 35.09% | 40.59%
Lenient       | 27.15% | 29.19% | 32.87%
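The figures above are F-measures for answer passage retrieval under exact, partial, and lenient matching of the detected focus, at three opinion-scope granularities (sentence, sentence fragment, meaningful unit). The sketch below shows one way the F-measure of a set of retrieved units against gold units could be computed; treating a partial match as any substring overlap is an assumption for illustration, not necessarily the paper's partial-match criterion.

```python
# Minimal sketch: precision, recall and F-measure of retrieved answer units
# against gold answer units. "Partial" match is assumed here to mean substring
# overlap, which may differ from the paper's own criterion.
def f_measure(retrieved, gold, partial=False):
    def match(a, b):
        return (a in b or b in a) if partial else a == b
    matched_retrieved = sum(1 for r in retrieved if any(match(r, g) for g in gold))
    matched_gold = sum(1 for g in gold if any(match(r, g) for r in retrieved))
    precision = matched_retrieved / len(retrieved) if retrieved else 0.0
    recall = matched_gold / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical example: one retrieved unit is a fragment of a gold unit, so it
# only counts under partial matching.
gold = ["the proposal violates basic human rights",
        "the decision process must be rigorous"]
retrieved = ["violates basic human rights",
             "japan has not issued a national id card"]
print(f_measure(retrieved, gold))                # 0.0 under exact match
print(f_measure(retrieved, gold, partial=True))  # 0.5 under partial match
```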
", "type_str": "table" }, "TABREF5": { "html": null, "text": "", "num": null, "content": "
Rel Degree → / Focus Detection ↓ | Rel2T | Rel2Q | Correct MU
Exact Match   | 36.69% | 36.73% | 50.43%
Partial Match | 34.79% | 47.15% | 70.15%
Lenient       | 28.03% | 48.35% | 80.73%

Table 9. Relevance Effects on Answer Passage Retrieval Using Action Words.
Rel Degree → / Focus Detection ↓ | Rel2T | Rel2Q | Correct MU
Exact Match   | 36.88% | 36.92% | 48.99%
Partial Match | 41.90% | 50.37% | 72.84%
Lenient       | 37.04% | 53.06% | 84.96%
", "type_str": "table" } } } }