{ "paper_id": "I05-1038", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:26:21.471835Z" }, "title": "Classification of Multiple-Sentence Questions", "authors": [ { "first": "Akihiro", "middle": [], "last": "Tamura", "suffix": "", "affiliation": { "laboratory": "Precision and Intelligence Laboratory", "institution": "Tokyo Institute of Technology", "location": { "country": "Japan" } }, "email": "" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "", "affiliation": { "laboratory": "Precision and Intelligence Laboratory", "institution": "Tokyo Institute of Technology", "location": { "country": "Japan" } }, "email": "takamura@pi.titech.ac.jp" }, { "first": "Manabu", "middle": [], "last": "Okumura", "suffix": "", "affiliation": { "laboratory": "Precision and Intelligence Laboratory", "institution": "Tokyo Institute of Technology", "location": { "country": "Japan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Conventional QA systems cannot answer to the questions composed of two or more sentences. Therefore, we aim to construct a QA system that can answer such multiple-sentence questions. As the first stage, we propose a method for classifying multiple-sentence questions into question types. Specifically, we first extract the core sentence from a given question text. We use the core sentence and its question focus in question classification. The result of experiments shows that the proposed method improves F-measure by 8.8% and accuracy by 4.4%.", "pdf_parse": { "paper_id": "I05-1038", "_pdf_hash": "", "abstract": [ { "text": "Conventional QA systems cannot answer to the questions composed of two or more sentences. Therefore, we aim to construct a QA system that can answer such multiple-sentence questions. As the first stage, we propose a method for classifying multiple-sentence questions into question types. Specifically, we first extract the core sentence from a given question text. We use the core sentence and its question focus in question classification. The result of experiments shows that the proposed method improves F-measure by 8.8% and accuracy by 4.4%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Question-Answering (QA) systems are useful in that QA systems return the answer itself, while most information retrieval systems return documents that may contain the answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "QA systems have been evaluated at TREC QA-Track 1 in U.S. and QAC (Question & Answering Challenge) 2 in Japan. In these workshops, the inputs to systems are only single-sentence questions, which are defined as the questions composed of one sentence. On the other hand, on the web there are a lot of multiple-sentence questions (e.g., answer bank 3 , AskAnOwner 4 ), which are defined as the questions composed of two or more sentences: For example, \"My computer reboots as soon as it gets started. OS is Windows XP. Is there any homepage that tells why it happens?\". For conventional QA systems, these questions are not expected and existing techniques are not applicable or work poorly to these questions. Therefore, constructing QA systems that can handle multiple-sentence questions is desirable. An usual QA system is composed of three components: question processing, document retrieval, and answer extraction. 
In question processing, a given question is analyzed, and its question type is determined. This process is called \"question classification\". Depending on the question type, the process in the answer extraction component usually changes. Consequently, the accuracy and the efficiency of answer extraction depend on the accuracy of question classification.", "cite_spans": [ { "start": 99, "end": 100, "text": "2", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Therefore, as a first step towards developing a QA system that can handle multiple-sentence questions, we propose a method for classifying multiplesentence questions. Specifically, in this work, we treat only questions which require one answer. For example, if the question \"The icon to return to desktop has been deleted. Please tell me how to recover it.\" is given, we would like \"WAY\" to be selected as the question type. We thus introduce core sentence extraction component, which extracts the most important sentence for question classification. This is because there are unnecessary sentences for question classification in a multiple-sentence question, and we hope noisy features should be eliminated before question classification with the component. If a multiple-sentence question is given, we first extract the most important sentence for question classification and then classify the question using the only information in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Section 2, we present the related work. In Section 3, we explain our proposed method. In Section 4, we describe our experiments and results, where we can confirm the effectiveness of the proposed method. Finally, in Section 5, we describe the summary of this paper and the future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This section presents some existing methods for question classification. The methods are roughly divided into two groups: the ones based on hand-crafted rules and the ones based on machine learning. The system \"SAIQA\" [1] , Xu et al. [2] used hand-crafted rules for question classification. However, methods based on pattern matching have the following two drawbacks: high cost of making rules or patterns by hand and low coverage.", "cite_spans": [ { "start": 218, "end": 221, "text": "[1]", "ref_id": "BIBREF0" }, { "start": 234, "end": 237, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Machine learning can be considered to solve these problems. Li et al. [3] used SNoW for question classification. The SNoW is a multi-class classifier that is specifically tailored for learning in the presence of a very large number of features. Zukerman et al. [4] used decision tree. Ittycheriah et al. [5] used maximum entropy. Suzuki [6] used Support Vector Machines (SVMs). Suzuki [6] compared question classification using machine learning methods (decision tree, maximum entropy, SVM) with a rule-based method. The result showed that the accuracy of question classification with SVM is the highest of all. According to Suzuki [6] , a lot of information is needed to improve the accuracy of question classification and SVM is suitable for question classification, because SVM can classify questions with high accuracy even when the dimension of the feature space is large. 
Moreover, Zhang et al. [7] compared question classification with five machine learning algorithms and showed that SVM outperforms the other four methods as Suzuki [6] showed. Therefore, we also use SVM in classifying questions, as we will explain later.", "cite_spans": [ { "start": 70, "end": 73, "text": "[3]", "ref_id": "BIBREF2" }, { "start": 261, "end": 264, "text": "[4]", "ref_id": "BIBREF3" }, { "start": 304, "end": 307, "text": "[5]", "ref_id": "BIBREF4" }, { "start": 337, "end": 340, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 385, "end": 388, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 632, "end": 635, "text": "[6]", "ref_id": "BIBREF5" }, { "start": 901, "end": 904, "text": "[7]", "ref_id": "BIBREF6" }, { "start": 1041, "end": 1044, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "However, please note that we treat not only usual single-sentence questions, but also multiple-sentence questions. Furthermore, our work differs from previous work in that we treat real data on the web, not artificial data prepared for the QA task. From these points, the results in this paper cannot be compared with the ones in the previous work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "This section describes our method for classifying multiple-sentence questions. We first explain the entire flow of our question classification. Figure 1 shows the proposed method. The next process changes depending on whether the given question is a singlesentence question or a multiple-sentence question. If the question consists of a single sentence, the question is sent directly to question classification component. If the question consists of multiple sentences, the question is sent to core sentence extraction component. In the component, a core sentence, which is defined as the most important sentence for question classification, is extracted. Then, the core sentence is sent to the question classification component and the question is classified using the information in the core sentence. In Figure 1 , \"core sentence extraction\" is peculiar to multiple-sentence questions.", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 152, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 807, "end": 815, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Two-Step Approach to Multiple-Sentence Question Classification", "sec_num": "3" }, { "text": "When a multiple-sentence question is given, the core sentence of the question is extracted. For example, if the question \"I have studied the US history. Therefore, I am looking for the web page that tells me what day Independence Day is.\" is given, the sentence \"Therefore, I am looking for the web page that tells me what day Independence Day is.\" is extracted as the core sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Core Sentence Extraction", "sec_num": "3.1" }, { "text": "With the core sentence extraction, we can eliminate noisy information before question classification. 
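To summarize the flow of Figure 1 before going into the details, the two-step approach can be written as the following minimal sketch. This is only an illustrative outline, not the authors' implementation: extract_core_sentence and classify_sentence are hypothetical callables standing in for the components described in Sections 3.1 and 3.2, and parenthesized parts are assumed to have been removed in preprocessing.

```python
import re

def classify_question(question_text, extract_core_sentence, classify_sentence):
    # Two-step flow of Figure 1: split the question into sentences, pick the
    # core sentence if there is more than one, then classify only that sentence.
    sentences = [s for s in re.split('[。．.!?！？]', question_text) if s.strip()]
    if len(sentences) == 1:
        core = sentences[0]                      # single-sentence question
    else:
        core = extract_core_sentence(sentences)  # multiple-sentence question (Section 3.1)
    return classify_sentence(core)               # question type, e.g. 'WAY' (Section 3.2)
```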
In the above example, the occurrence of the sentence \"I have studied the US history.\" would be a misleading information in terms of question classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Core Sentence Extraction", "sec_num": "3.1" }, { "text": "Here, we have based our work on the following assumption: a multiplesentence question can be classified using only the core sentence. Please note that we treat only questions which require one answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Core Sentence Extraction", "sec_num": "3.1" }, { "text": "We explain the method for extracting a core sentence. Suppose we have a classifier, which returns Score(S i ) for each sentence S i of Question. Question is the set of sentences composing a given question. Score(S i ) indicates the likeliness of S i being the core sentence. The sentence with the largest value is selected as the core sentence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Core Sentence Extraction", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Core sentence = argmax Si\u2208Question Score(S i ).", "eq_num": "(1)" } ], "section": "Core Sentence Extraction", "sec_num": "3.1" }, { "text": "We then extract features for constructing a classifier which returns Score(S i ). We use the information on the words as features. Only the features from the target sentence would not be enough for accurate classification. This issue is exemplified by the following questions (core sentences are underlined).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Core Sentence Extraction", "sec_num": "3.1" }, { "text": "Please advise a medication effective for hay fever. I want to relieve my headache and stuffy nose. Especially my headache is severe. -Question 2:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "-Question 1:", "sec_num": null }, { "text": "I want to relieve my headache and stuffy nose. Especially my headache is severe.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "-Question 1:", "sec_num": null }, { "text": "While the sentence \"I want to relieve my headache and stuffy nose.\" written in bold-faced type is the core sentence in Question 2, the sentence is not suitable as the core sentence in Question 1. These examples show that the target sentence alone is sometimes not a sufficient evidence for core sentence extraction. Thus, in classification of a sentence, we use its preceding and following sentences. For that purpose, we introduce a notion of window size. \"Window size is n\" means \"the preceding n sentences and the following n sentences in addition to the target sentence are used to make a feature vector\". For example, if window size is 0, we use only the target sentence. If window size is \u221e, we use all the sentences in the question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "-Question 1:", "sec_num": null }, { "text": "We use SVM as a classifier. We regard the functional distance from the separating hyperplane (i.e., the output of the separating function) as Score(S i ). Word unigrams and word bigrams of the target sentence and the sentences in the window are used as features. 
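As a concrete illustration of Equation (1) and of the windowed unigram/bigram features, here is a minimal sketch. It is not the authors' implementation: the real system uses TinySVM over ChaSen-segmented Japanese text, whereas here each sentence is assumed to be already given as a list of words, the trained model is abstracted into a score callable that returns the signed distance from the separating hyperplane, and the feature names are our own.

```python
def ngram_features(words, prefix):
    # Word unigrams and bigrams, tagged with a position prefix so that the
    # same word yields different features depending on where it occurs.
    feats = [f'{prefix}/UNI:{w}' for w in words]
    feats += [f'{prefix}/BI:{a}_{b}' for a, b in zip(words, words[1:])]
    return feats

def candidate_features(sentences, i, window=None):
    # Features for candidate sentence i; window=None corresponds to
    # window size infinity (use every other sentence in the question).
    feats = ngram_features(sentences[i], 'TARGET')
    lo = 0 if window is None else max(0, i - window)
    hi = len(sentences) if window is None else min(len(sentences), i + window + 1)
    for j in range(lo, hi):
        if j != i:
            feats += ngram_features(sentences[j], 'CONTEXT')
    return feats

def extract_core_sentence(sentences, score, window=None):
    # Equation (1): the core sentence is the one with the largest Score(S_i),
    # where score() is the classifier's output for a feature list.
    best = max(range(len(sentences)),
               key=lambda i: score(candidate_features(sentences, i, window)))
    return sentences[best]
```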
A word in the target sentence and the same word in the other sentences are regarded as two different features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "-Question 1:", "sec_num": null }, { "text": "As discussed in Section 2, we use SVM in the classification of questions. We use five sets of features: word unigrams, word bigrams, semantic categories of nouns, question focuses, and semantic categories of question focuses. The semantic categories are obtained from a thesaurus (e.g., SHOP, STATION, CITY).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Classification", "sec_num": "3.2" }, { "text": "\"Question focus\" is the word that determines the answer class of the question. The notion of question focus was described by Moldovan et al. [8] . For instance, in the question \"What country is -?\", the question focus is \"country\". In many researches, question focuses are extracted with hand-crafted rules. However, since we treat all kinds of questions including the questions which are not in an interrogative form, such as \"Please teach me -\" and \"I don't know -\", it is difficult to manually create a comprehensive set of rules. Therefore, in this paper, we automatically find the question focus in a core sentence according to the following steps : step 1 find the phrase 5 including the last verb of the sentence or the phrase with \"?\" at the end. step 2 find the phrase that modifies the phrase found in step 1. step 3 output the nouns and the unknown words in the phrase found in step 2.", "cite_spans": [ { "start": 141, "end": 144, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Question Classification", "sec_num": "3.2" }, { "text": "The output of this procedure is regarded as a question focus. Although this procedure itself is specific to Japanese, we suppose that we can extract question focus for other languages with a similar simple procedure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Classification", "sec_num": "3.2" }, { "text": "We designed experiments to confirm the effectiveness of the proposed method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In the experiments, we use data in Japanese. We use a package for SVM computation, TinySVM 6 , and a Japanese morphological analyzer, ChaSen 7 for word segmentation of Japanese text. We use CaboCha 8 to obtain dependency relations, when a question focus is extracted from a question. Semantic categories are obtained from a thesaurus \"Goitaikei\" [9] .", "cite_spans": [ { "start": 91, "end": 92, "text": "6", "ref_id": "BIBREF5" }, { "start": 346, "end": 349, "text": "[9]", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We collect questions from two Japanese Q&A sites: hatena 9 and Yahoo!tiebukuro 10 . 2000 questions are extracted from each site and experimental data consist of 4000 questions in total. A Q&A site is the site where a user puts a question on the site and other users answer the question on the site. Such Q&A sites include many multiple-sentence questions in various forms. 
Therefore, those questions are appropriate for our experiments, which require non-artificial questions.", "cite_spans": [ { "start": 57, "end": 58, "text": "9", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.1" }, { "text": "Here, we manually exclude the following three kinds of questions from the dataset: questions whose answers are only Yes or No, questions which require two or more answers, and questions which are not actually questions. This deletion left us with 2376 questions. The question types that we used and their numbers are shown in Table 1 11 . Question types requiring nominal answers are determined by referring to the categories used by Sasaki et al. [1] .", "cite_spans": [ { "start": 440, "end": 443, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 321, "end": 328, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.1" }, { "text": "Of the 2376 questions, 818 are single-sentence questions and 1558 are multiple-sentence questions. The average number of sentences in a multiple-sentence question is 3.49. Therefore, the task of core sentence extraction in our setting is to select a core sentence from 3.49 sentences on average. As an evaluation measure for core sentence extraction, we use accuracy, which is defined as the number of multiple-sentence questions whose core sentence is correctly identified divided by the total number of multiple-sentence questions. To calculate the accuracy, the correct core sentences of the 2376 questions were manually tagged in preparation for the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.1" }, { "text": "As an evaluation measure for question classification, we use F-measure, which is defined as 2 \u00d7 Recall \u00d7 Precision / (Recall + Precision). As another evaluation measure for question classification, we also use accuracy, which is defined as the number of questions whose type is correctly classified divided by the total number of questions. All experimental results are obtained with two-fold cross-validation. (A small numerical illustration of these measures is given below.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.1" }, { "text": "We conduct core sentence extraction experiments with four different window sizes (0, 1, 2, and \u221e) and three different feature sets (unigram, bigram, and unigram+bigram). Table 2 shows the result.", "cite_spans": [], "ref_spans": [ { "start": 173, "end": 180, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Core Sentence Extraction", "sec_num": "4.2" }, { "text": "As this result shows, we obtained a high accuracy of more than 90% on this task. This accuracy is high enough that we can use the result for the succeeding task of question classification, which is our main target. The result also shows that larger window sizes are better for core sentence extraction, which indicates that good clues for core sentence extraction are scattered throughout the question. 
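As a small numerical illustration of the evaluation measures defined in Section 4.1, the sketch below recomputes the best accuracy cell of Table 2; the counts come from Table 2, and the helper functions are ours rather than part of the proposed system.

```python
def accuracy(num_correct, num_total):
    # Fraction of multiple-sentence questions whose core sentence is correctly identified.
    return num_correct / num_total

def f_measure(precision, recall):
    # F-measure as defined in Section 4.1: 2 x Recall x Precision / (Recall + Precision).
    return 2 * recall * precision / (recall + precision)

# Best setting in Table 2: unigram+bigram features with window size infinity.
print(f'{accuracy(1416, 1558):.3f}')  # prints 0.909
```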
The result in Table 2 also shows that unigram+bigram features are most effective for any window size in core sentence extraction.", "cite_spans": [], "ref_spans": [ { "start": 404, "end": 411, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Core Sentence Extraction", "sec_num": "4.2" }, { "text": "To confirm the validity of our proposed method, we extract core sentences with three simple methodologies, which respectively extract one of the following sentences as the core sentence : (1) the first sentence, (2) the last sentence, and (3) the last interrogative sentence (or the first sentence). Table 3 shows the result. The result shows that such simple methodologies would not work in core sentence extraction.", "cite_spans": [], "ref_spans": [ { "start": 300, "end": 307, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Core Sentence Extraction", "sec_num": "4.2" }, { "text": "We conduct experiments to examine whether the core sentence extraction is effective for question classification or not. For that purpose, we construct the following three models:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Classification: The Effectiveness of Core Sentence Extraction", "sec_num": "4.3" }, { "text": "Plain question. The given question is the input of question classification component without core sentence extraction process. Predicted core sentence. The core sentence extracted by the proposed method in Section 3.1 is the input of question classification component. The accuracy of core sentence extraction process is 90.9% as mentioned in Section 4.2. Correct core sentence. The correct core sentence tagged by hand is the input of question classification component. This case corresponds to the case when the accuracy of core sentence extraction process is 100%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Classification: The Effectiveness of Core Sentence Extraction", "sec_num": "4.3" }, { "text": "Word unigrams, word bigrams, and semantic categories of nouns are used as features. The features concerning question focus cannot be used for the plain question model, because the method for identifying the question focus requires that the input be one sentence. Therefore, in order to clarify the effectiveness of core sentence extraction itself, through fair comparison we do not use question focus for each of the three models in these experiments. Table 4 shows the result. For most question types, the proposed method with a predicted core sentence improves F-measure. This result shows that the core sentence extraction is effective in question classification. We can still expect some more improvement of performance, by boosting accuracy of core sentence extraction.", "cite_spans": [], "ref_spans": [ { "start": 452, "end": 459, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Question Classification: The Effectiveness of Core Sentence Extraction", "sec_num": "4.3" }, { "text": "In order to further clarify the importance of core sentence extraction, we examine the accuracy for the questions whose core sentences are not correctly extracted. Of 142 such questions, 54 questions are correctly classified. In short, the accuracy is 38% and very low. 
Therefore, we can claim that without accurate core sentence extraction, accurate question classification is quite difficult.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Classification: The Effectiveness of Core Sentence Extraction", "sec_num": "4.3" }, { "text": "Here we investigate the effectiveness of each set of features and the influence of the preceding and following sentences of the core sentence. After that, we conduct concluding experiments. In the first two experiments of this section, we use only the correct core sentence tagged by hand as the input of question classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question Classification: More Detailed Investigation of Features", "sec_num": "4.4" }, { "text": "First, to examine which feature sets are effective in question classification, we exclude the feature sets one by one from the five feature sets described in Section 3.2 and conduct question classification experiments. Please note that, unlike in the last experiment (Table 4) , all five feature sets can be used here, because the input of question classification is a single sentence. (Table 5 : experiments with each feature set excluded; \"sem. noun\" means semantic categories of nouns, and \"sem. qf\" means semantic categories of question focuses.) Table 5 shows the result. The numbers in parentheses are the differences in F-measure from its original value. A decrease in F-measure indicates that the excluded feature set is effective.", "cite_spans": [], "ref_spans": [ { "start": 295, "end": 304, "text": "(Table 4)", "ref_id": "TABREF2" }, { "start": 369, "end": 376, "text": "Table 5", "ref_id": null }, { "start": 539, "end": 546, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "The Effectiveness of Each Feature Set", "sec_num": null }, { "text": "We first discuss the differences in the F-measure values in Table 5 , taking PRODUCT and WAY as examples. The F-measure of PRODUCT is much smaller than that of WAY. This difference depends on whether characteristic expressions are present in the type or not. In WAY, words and phrases such as \"method\" and \"How do I -?\" are often used. Such words and phrases work as good clues for classification. However, there are no such characteristic expressions for PRODUCT. Although there is a frequently-used expression \"What is [noun] -?\", this expression is often used also in other types such as LOCATION and FACILITY. We have to rely on currently-unavailable world knowledge of whether the noun is a product name or not. This is the reason for the low F-measure for PRODUCT.", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 62, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "The Effectiveness of Each Feature Set", "sec_num": null }, { "text": "We next discuss how the effective feature sets differ across question types. We again take PRODUCT and WAY as examples. The most effective feature set is semantic categories of nouns for \"PRODUCT\" and bigrams for \"WAY\". Since whether a noun is a product name or not is important for PRODUCT, as discussed above, semantic categories of nouns are crucial to PRODUCT. On the other hand, important clues for WAY are phrases such as \"How do I\". Therefore, bigrams are crucial to WAY. Finally, we discuss the effectiveness of the question focus. The result in Table 5 shows that the F-measure does not change much even if question focuses or their semantic categories are excluded. 
This is because both question focuses and their semantic categories are redundantly put in the feature sets. By comparing Tables 4 and 5, we can confirm that question focuses improve question classification performance (F-measure increases from 0.514 to 0.532). Please note again that question focuses are not used in Table 4 for fair comparison.", "cite_spans": [], "ref_spans": [ { "start": 562, "end": 570, "text": "Table 5", "ref_id": null }, { "start": 1006, "end": 1013, "text": "Table 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "The Effectiveness of Each Feature Set", "sec_num": null }, { "text": "Next, we clarify the influence of window size. As in core sentence extraction, \"Window size is n\" means that \"the preceding n sentences and the following n sentences in addition to the core sentence are used to make a feature vector\". We construct four models with different window sizes (0, 1, 2, and \u221e) and compare their experimental results. In this experiment, we use five sets of features and correct core sentence as the input of question classification like the last experiment (Table 5) . Table 6 shows the result of the experiment. The result in Table 6 shows that the model with the core sentence alone is best. Therefore, the sentences other than the core sentence are considered to be noisy for classification and would not contain effective information for question classification. This result suggests that the assumption (a multiple-sentence question can be classified using only the core sentence) described in Section 3.1 be correct. ", "cite_spans": [], "ref_spans": [ { "start": 485, "end": 494, "text": "(Table 5)", "ref_id": null }, { "start": 497, "end": 504, "text": "Table 6", "ref_id": "TABREF4" }, { "start": 555, "end": 562, "text": "Table 6", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "The Influence of Window Size", "sec_num": null }, { "text": "We have so far shown that core sentence extraction and question focuses work well for question classification. In this section, we conduct concluding experiments which show that our method significantly improves the classification performance. In the discussion on effective features, we used correct core sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concluding Experiments", "sec_num": null }, { "text": "Here we use predicted core sentences. The result is shown in Table 7 . For comparison, we add to this table the values of F-measure in Table 4 , which correspond to plain question (i.e., without core sentence extraction). The result shows that F-measure of most categories increase, except for FACILITY and DEFINITION. From comparison of \"All\" in Table 5 with Table 7 , the reason of decrease would be the low accuracies of core sentence extraction for these categories. 
In conclusion, as shown in this table, we obtained an 8.8% increase in the average F-measure over all question types and a 4.4% increase in accuracy, which is statistically significant according to a sign test at the 1% significance level.", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 68, "text": "Table 7", "ref_id": "TABREF5" }, { "start": 135, "end": 142, "text": "Table 4", "ref_id": "TABREF2" }, { "start": 347, "end": 354, "text": "Table 5", "ref_id": null }, { "start": 360, "end": 367, "text": "Table 7", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Concluding Experiments", "sec_num": null }, { "text": "One may consider that the type of a multiple-sentence question can be identified by a \"one-step\" approach without core sentence extraction. That is, the question type of each sentence in the given multiple-sentence question is first identified by a classifier, and then the type of the sentence for which the classifier outputs the largest score is selected as the type of the given question. The classifier's output indicates the likelihood of being the question type of the given question. Therefore, we compared the proposed model with this model in a preliminary experiment. The accuracy of question classification with the proposed model is 66.1% (1570/2376), and that of the one-step approach is 61.7% (1467/2376). This result shows that our two-step approach is effective for the classification of multiple-sentence questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concluding Experiments", "sec_num": null }, { "text": "In this paper, we proposed a method for identifying the types of multiple-sentence questions. In our method, the core sentence is first extracted from a given multiple-sentence question and then used for question classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We obtained an accuracy of 90.9% in core sentence extraction and empirically showed that larger window sizes are more effective for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We also showed that the extracted core sentences and the question focuses are effective for question classification. Core sentence extraction is also important in the sense that question focuses could not be introduced without core sentences. With the proposed method, we obtained an 8.8% increase in F-measure and a 4.4% increase in accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Future work includes the following. The question focuses extracted by the proposed method include nouns which might not be appropriate for question classification; we therefore regard the improvement of question focus detection as future work. 
To construct a QA system that can handle multiple-sentence question, we are also planning to work on the other components: document retrieval, answer extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Phrase here is actually Japanese bunsetsu phrase, which is the smallest meaningful sequence consisting of an independent word and accompanying words.6 http://chasen.org/ \u223c taku/software/TinySVM/ 7 http://chasen.naist.jp/hiki/ChaSen/ 8 http://chasen.org/ \u223c taku/software/cabocha/ 9 http://www.hatena.ne.jp/ 10 http://knowledge.yahoo.co.jp/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Although Sasaki et al.[1] includes ORGANIZATION in question types, ORGA-NIZATION is integrated into OTHERS (NOUN) in our work because the size of ORGANIZATION is small.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "NTT's QA Systems for NTCIR QAC-1. Working Notes", "authors": [ { "first": "Yutaka", "middle": [], "last": "Sasaki", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Isozaki", "suffix": "" }, { "first": "Tsutomu", "middle": [], "last": "Hirao", "suffix": "" }, { "first": "Koji", "middle": [], "last": "Kokuryou", "suffix": "" }, { "first": "Eisaku", "middle": [], "last": "Maeda", "suffix": "" } ], "year": 2002, "venue": "NTCIR Workshop", "volume": "3", "issue": "", "pages": "63--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yutaka Sasaki, Hideki Isozaki, Tsutomu Hirao, Koji Kokuryou, and Eisaku Maeda: NTT's QA Systems for NTCIR QAC-1. Working Notes, NTCIR Workshop 3, Tokyo, pp. 63-70, 2002.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "TREC 2003 QA at BBN: Answering Definitional Questions. TREC 2003", "authors": [ { "first": "Jinxi", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Ana", "middle": [], "last": "Licuanan", "suffix": "" }, { "first": "Ralph", "middle": [ "M" ], "last": "Weischedel", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "98--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jinxi Xu, Ana Licuanan, and Ralph M.Weischedel: TREC 2003 QA at BBN: An- swering Definitional Questions. TREC 2003, pp. 98-106, 2003.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning Question Classifiers", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "556--562", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Li and Dan Roth: Learning Question Classifiers. COLING 2002, Taipei, Taiwan, pp. 556-562, 2002.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Using Machine Learning Techniques to Interpret WH-questions. ACL", "authors": [ { "first": "Ingrid", "middle": [], "last": "Zukerman", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Horvitz", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "547--554", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ingrid Zukerman and Eric Horvitz: Using Machine Learning Techniques to Interpret WH-questions. ACL 2001, Toulouse, France, pp. 
547-554, 2001.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Question Answering Using Maximum Entropy Components", "authors": [ { "first": "Abraham", "middle": [], "last": "Ittycheriah", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Franz", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Adwait", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "33--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abraham Ittycheriah, Martin Franz, Wei-Jing Zhu, and Adwait Ratnaparkhi: Ques- tion Answering Using Maximum Entropy Components. NAACL 2001, pp. 33-39, 2001.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Kernels for Structured Data in Natural Language Processing", "authors": [ { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Suzuki: Kernels for Structured Data in Natural Language Processing, Doctor Thesis, Nara Institute of Science and Technology, 2005.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Question Classification using Support Vector Machines. SIGIR", "authors": [ { "first": "Dell", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wee", "middle": [], "last": "Sun Lee", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "26--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dell Zhang and Wee Sun Lee: Question Classification using Support Vector Ma- chines. SIGIR, Toronto, Canada, pp. 26-32, 2003.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Lasso: A Tool for Surfing the Answer Net. TREC-8", "authors": [ { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" }, { "first": "Sanda", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Pasca", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Goodrum", "suffix": "" }, { "first": "Roxana", "middle": [], "last": "Girju", "suffix": "" }, { "first": "Vasile", "middle": [], "last": "Rus", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "175--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Moldovan, Sanda Harabagiu, Marius Pasca, Rada Mihalcea, Richard Goodrum, Roxana Girju, and Vasile Rus: Lasso: A Tool for Surfing the Answer Net. TREC-8, pp. 
175-184, 1999.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The Semantic System", "authors": [ { "first": "Satoru", "middle": [], "last": "Ikehara", "suffix": "" }, { "first": "Masahiro", "middle": [], "last": "Miyazaki", "suffix": "" }, { "first": "Satoshi", "middle": [], "last": "Shirai", "suffix": "" }, { "first": "Akio", "middle": [], "last": "Yokoo", "suffix": "" }, { "first": "Hiromi", "middle": [], "last": "Nakaiwa", "suffix": "" }, { "first": "Kentaro", "middle": [], "last": "Ogura", "suffix": "" }, { "first": "Yoshifumi", "middle": [], "last": "Oyama", "suffix": "" }, { "first": "Yoshihiko", "middle": [], "last": "Hayashi", "suffix": "" } ], "year": 1997, "venue": "", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satoru Ikehara, Masahiro Miyazaki, Satoshi Shirai, Akio Yokoo, Hiromi Nakaiwa, Kentaro Ogura, Yoshifumi Oyama, and Yoshihiko Hayashi, editors: The Semantic System, volume 1 of Goi-Taikei -A Japanese Lexicon. Iwanami Shoten, 1997 (in Japanese).", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "num": null, "text": "The entire flow of question classification An input question consisting of possibly multiple sentences is first preprocessed. Parentheses parts are excluded in order to avoid errors in syntactic parsing. The question is divided into sentences by punctuation marks.", "type_str": "figure" }, "TABREF0": { "num": null, "html": null, "text": "The types and the distribution of 2376 questions", "content": "
Nominal AnswerNon-nominal Answer
Question TypeNumber Question TypeNumber
PERSON64 REASON132
PRODUCT238 WAY500
FACILITY139 DEFINITION73
LOCATION393 DESCRIPTION228
TIME108 OPINION173
NUMBER53 OTHERS (TEXT)131
OTHERS (NOUN)144
11391237
TOTAL 2376
", "type_str": "table" }, "TABREF1": { "num": null, "html": null, "text": "Accuracy of core sentence extraction with different window sizes and features", "content": "
Window Size\\ FeaturesUnigramBigramUnigram+Bigram
01350/1558= 0.866 1378/1558= 0.884 1385/1558= 0.889
11357/1558= 0.871 1386/1558= 0.890 1396/1558= 0.896
21364/1558= 0.875 1397/1558= 0.897 1405/1558= 0.902
\u221e1376/1558= 0.883 1407/1558= 0.903 1416/1558= 0.909
Table 3. Accuracy of core sentence extraction with simple methodologies
MethodologyAccuracy
First Sentence743/1558= 0.477
Last Sentence471/1558= 0.302
Interrogative Sentence 1077/1558= 0.691
", "type_str": "table" }, "TABREF2": { "num": null, "html": null, "text": "F-measure and Accuracy of the three models for question classification", "content": "
ModelPlain Question Predicted Core Sentence Correct Core Sentence
Accuracy of Core Sentence Extraction - 0.909 1.000
PERSON0.4620.4340.505
PRODUCT0.3810.4670.480
FACILITY0.5840.5690.586
LOCATION0.7580.7800.824
TIME0.3400.5080.524
NUMBER0.2620.4420.421
OTHERS (NOUN)0.0490.1440.145
REASON0.2800.5390.579
WAY0.7560.7780.798
DEFINITION0.6430.6240.656
DESCRIPTION0.2960.3150.317
OPINION0.5910.6750.659
OTHERS (TEXT)0.0900.1790.186
Average0.4230.4960.514
Accuracy0.6170.6210.652
", "type_str": "table" }, "TABREF4": { "num": null, "html": null, "text": "Experiments with different window sizes", "content": "
Window Size
012\u221e
PERSON0.574 0.5580.5650.570
PRODUCT0.506 0.4490.4410.419
FACILITY0.612 0.6070.5960.578
LOCATION0.832 0.8270.8170.815
TIME0.475 0.3120.2880.302
NUMBER0.442 0.3220.2960.311
OTHERS (NOUN) 0.210 0.1230.1200.050
REASON0.564 0.4860.4720.439
WAY0.817 0.8080.8090.792
DEFINITION0.652 0.658 0.6580.641
DESCRIPTION0.355 0.358 0.3570.340
OPINION0.696 0.6700.6580.635
OTHERS (TEXT) 0.183 0.1400.1290.133
Average0.532 0.4860.4770.463
Accuracy0.674 0.6560.6580.653
", "type_str": "table" }, "TABREF5": { "num": null, "html": null, "text": "The result of concluding experiments", "content": "
Plain Question The Proposed Method
core sentence extractionNoYes
feature setsunigram, bigram unigram,bigram,qf
sem. nounsem. noun,sem. qf
PERSON0.4620.492
PRODUCT0.3810.504
FACILITY0.5840.575
LOCATION0.7580.792
TIME0.3400.495
NUMBER0.2620.456
OTHERS (NOUN)0.0490.189
REASON0.2800.537
WAY0.7560.789
DEFINITION0.6430.626
DESCRIPTION0.2960.321
OPINION0.5910.677
OTHERS (TEXT)0.0900.189
Average0.4230.511
Accuracy0.6170.661
", "type_str": "table" } } } }