{ "paper_id": "I05-1037", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:25:19.463575Z" }, "title": "A Preliminary Work on Classifying Time Granularities of Temporal Questions", "authors": [ { "first": "Wei", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong Polytechnic University", "location": { "addrLine": "Hung Hom", "settlement": "Hong Kong" } }, "email": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Qin", "middle": [], "last": "Lu", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong Polytechnic University", "location": { "addrLine": "Hung Hom", "settlement": "Hong Kong" } }, "email": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "", "affiliation": { "laboratory": "", "institution": "The Hong Kong Polytechnic University", "location": { "addrLine": "Hung Hom", "settlement": "Hong Kong" } }, "email": "kfwong@se.cuhk.edu.hk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Temporal question classification assigns time granularities to temporal questions according to their anticipated answers. It is very important for answer extraction and verification in the literature of temporal question answering. Other than simply distinguishing between \"date\" and \"period\", a more finegrained classification hierarchy scaling down from \"millions of years\" to \"second\" is proposed in this paper. Based on it, a SNoW-based classifier, combining user preference, word N-grams, granularity of time expressions, special patterns as well as event types, is built to choose appropriate time granularities for the ambiguous temporal questions, such as When-and How long-like questions. Evaluation on 194 such questions achieves 83.5% accuracy, almost close to manually tagging accuracy 86.2%. Experiments reveal that user preferences make significant contributions to time granularity classification.", "pdf_parse": { "paper_id": "I05-1037", "_pdf_hash": "", "abstract": [ { "text": "Temporal question classification assigns time granularities to temporal questions according to their anticipated answers. It is very important for answer extraction and verification in the literature of temporal question answering. Other than simply distinguishing between \"date\" and \"period\", a more finegrained classification hierarchy scaling down from \"millions of years\" to \"second\" is proposed in this paper. Based on it, a SNoW-based classifier, combining user preference, word N-grams, granularity of time expressions, special patterns as well as event types, is built to choose appropriate time granularities for the ambiguous temporal questions, such as When-and How long-like questions. Evaluation on 194 such questions achieves 83.5% accuracy, almost close to manually tagging accuracy 86.2%. Experiments reveal that user preferences make significant contributions to time granularity classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Temporal questions, such as the questions with the interrogatives \"when\", \"how long\" and \"which year\", seek for the occurrence time of the events or the temporal attributes of the entities. Temporal question classification plays an important role in the literature of question answering and temporal information processing. 
In the evaluation of the TREC 10 Question-Answering (QA) track [1], more than 10% of the questions in the test corpus are temporal questions. Different from the TREC QA track, the Workshop TERQAS (http://www.timeml.org/terqas/) investigated temporal question answering in particular rather than question answering in general. It focused on temporal and event recognition in question answering systems and paid great attention to the temporal relations among states, events and time expressions in temporal questions. TimeML (http://www.timeml.org), a temporal information (e.g. time expression, tense & aspect) annotation standard, has also been used for temporal question answering in this workshop [2]. Correct understanding of a temporal question greatly helps extract and verify its answers and certainly improves the performance of any question answering system. Consider the following examples.", "cite_spans": [ { "start": 383, "end": 386, "text": "[1]", "ref_id": "BIBREF1" }, { "start": 999, "end": 1002, "text": "[2]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[Ea]. What is the birthday of Abraham Lincoln? [Eb]. When did the Neanderthal man live?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In a general question answering system, the question classifier commonly classifies temporal questions into two classes, i.e. \"date\" and \"period\". With such a system, the above two questions are both assigned \"date\". Whereas it is natural for question [Ea] to be answered with a particular date (e.g. \"12/02/1809\"), this is not the case for question [Eb], because a proper answer could be \"35,000 years ago\". However, if it is known that the time granularity concerned is \"thousands of years\", answer extraction becomes more targeted. The need for a more fine-grained classification is obvious. Although different question classification hierarchies have been reported [3, 4, 12, 13, 14], few of them introduced a classification hierarchy (e.g. \"year\", \"month\" and \"day\") that could give a clearer direction to guide the answer extraction and verification of temporal questions. In the following, we examine whether temporal questions can be further classified into finer time granularities and how to classify them.", "cite_spans": [ { "start": 682, "end": 685, "text": "[3,", "ref_id": "BIBREF3" }, { "start": 686, "end": 688, "text": "4,", "ref_id": "BIBREF4" }, { "start": 689, "end": 692, "text": "12,", "ref_id": "BIBREF12" }, { "start": 693, "end": 696, "text": "13,", "ref_id": "BIBREF13" }, { "start": 697, "end": 700, "text": "14]", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "By examining a temporal question corpus consisting of 348 questions, 293 of which are gathered from the UIUC question answering labelled data (http://l2r.cs.uiuc.edu/~cogcomp/Data/QA/QC) and the remaining 55 from the TREC 10 test corpus, we find two different cases. On the one hand, some questions are very straightforward in expressing the time granularities of the expected answers, e.g. the questions beginning with \"which year\" or \"for how many years\". On the other hand, some questions are not so obvious, e.g. the questions headed by \"when\" or \"for how long\". We call such questions ambiguous questions.
Not surprisingly, the ambiguous When- and How long-like questions account for a large proportion of this temporal question corpus, i.e. 197 of the 348 questions in total.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We further investigate those 197 ambiguous questions in order to find out whether they can be classified into finer time granularities. Three experimenters are asked to tag a time granularity for each question independently 1 . Answers are not provided. The tag with two agreements is taken as the time granularity class of the corresponding temporal question; otherwise the tag \"UNKNOWN\" is assigned. Reference answers for the questions are extracted from AltaVista Web Search (http://www.altavista.com). Comparing the time granularities tagged manually with those provided by the reference answers, we find that only 27 out of the 197 questions are incorrectly tagged; in other words, the manual tagging accuracy is 86.2%. Despite these errors, the relatively high agreement between the users' tags and the reference answers raises the hope of automatically determining the time granularities of temporal questions.", "cite_spans": [ { "start": 228, "end": 229, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Analysing the tagging results, we find that the tagging errors arise from three sources: insufficient world knowledge, different speaking habits and different expected information granularities among humans. For question [Ec], the time granularity should be \"thousands of years\" rather than \"year\"; this error could be corrected if one knew that the Neanderthal man existed 35,000 years ago. The time granularity of question [Ed] should be \"week\", not \"month\", in accordance with speaking habits. For question [Ee], the users' tag is \"year\", different from the reference answer's tag \"day\"; however, both granularities are acceptable in common sense, because different users may want coarser or finer information. This observation suggests that incorporating question context, world knowledge and speaking habits would help determine the time granularities of temporal questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a fine-grained temporal question classification scheme, i.e. a time granularity hierarchy, consisting of sixteen non-exclusive classes and scaling down from \"millions of years\" to \"second\". A SNoW-based classifier is then built to combine linguistic features (including word N-grams, granularity of time expressions and special patterns), user preferences and event types, and to assign one of the sixteen classes to each temporal question. In our work, user preference, which characterizes world knowledge and speaking habits, is estimated by means of the time granularities of the entities and/or events involved. The SNoW-based classifier achieves 83.5% accuracy, close to the 86.2% manual tagging accuracy. Experiments also show that user preference makes a great contribution to time granularity classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows. Sect. 2 introduces related work. Sect.
3 presents the time granularity hierarchy and the tagging principles. User preference is investigated in Sect. 4 and feature design is described in Sect. 5. The time granularity classifiers are introduced in Sect. 6 and the experimental results are presented in Sect. 7. We conclude the paper in the last section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the TREC QA track, almost every QA system participating in the evaluation has a question classification module, which makes question classification a hot topic. Questions can be classified from several aspects. Most classification hierarchies [3, 4, 12, 13, 14] adopt the anticipated answer types as their classification criteria. Abney et al. [4] gave a coarse classification hierarchy with seven classes (person, location, etc.). Hovy et al. [13] introduced a finer classification with forty-seven classes manually constructed from 17,000 practical questions. Li et al. [3] proposed a two-level classification hierarchy, a coarser one with six classes and a finer one with fifty classes. In all these classification hierarchies, temporal questions are simply classified into two classes, i.e. \"date\" and \"period\". Some works classified temporal questions from other aspects. In [2], a temporal question classification hierarchy is proposed according to the temporal relations among states, events and time expressions. In [5], temporal questions are classified into three types with regard to question structure: non-temporal, simple and complex. Diaz and Jones [6] presented interesting work on the statistics of the number of topics along the timeline: according to whether questions or topics have a clear distribution along the timeline, they can be classified into three types, i.e. atemporal, temporally clear and temporally ambiguous. Focusing on ambiguous temporal questions, e.g. When- and How long-like questions, we introduce a classification hierarchy in terms of the anticipated answer types.", "cite_spans": [ { "start": 235, "end": 238, "text": "[3,", "ref_id": "BIBREF3" }, { "start": 239, "end": 241, "text": "4,", "ref_id": "BIBREF4" }, { "start": 242, "end": 245, "text": "12,", "ref_id": "BIBREF12" }, { "start": 246, "end": 249, "text": "13,", "ref_id": "BIBREF13" }, { "start": 250, "end": 253, "text": "14]", "ref_id": "BIBREF14" }, { "start": 334, "end": 337, "text": "[4]", "ref_id": "BIBREF4" }, { "start": 434, "end": 438, "text": "[13]", "ref_id": "BIBREF13" }, { "start": 562, "end": 565, "text": "[3]", "ref_id": "BIBREF3" }, { "start": 870, "end": 873, "text": "[2]", "ref_id": "BIBREF2" }, { "start": 1011, "end": 1014, "text": "[5]", "ref_id": "BIBREF5" }, { "start": 1152, "end": 1155, "text": "[6]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "It is an extension of the two classes \"date\" and \"period\" and includes sixteen non-exclusive classes scaling down from \"millions of years\" to \"second\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Related to feature design, Li et al. [3] built a question classifier based on three types of features, including surface text (e.g. N-grams), syntactic features (e.g. part-of-speech and named entity tags), and semantically related words (words that often occur with a specific question class). Later work of Li et al. [10] introduced semantic information and world knowledge from external resources such as WordNet.
In this paper, we introduce a new feature, user preference, which is expected to capture world knowledge about time granularity. User preference is estimated from statistics, in the same manner as Diaz and Jones [6] determine whether a question is temporally ambiguous or not. Saquete et al. [5] suggested that questions have different structures, i.e. non-temporal, simple and complex, which is helpful for handling questions in a more orderly way. This inspires us to use the question focus, i.e. whether a question is event-based or entity-based.", "cite_spans": [ { "start": 50, "end": 53, "text": "[3]", "ref_id": "BIBREF3" }, { "start": 329, "end": 333, "text": "[10]", "ref_id": "BIBREF10" }, { "start": 644, "end": 647, "text": "[6]", "ref_id": "BIBREF6" }, { "start": 725, "end": 728, "text": "[5]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "Many machine-learning methods have been used in question classification, such as the language model [7], SNoW [3, 10], maximum entropy [15] and support vector machines [8, 9]. In our experiments, the language model is selected as the baseline model, and SNoW is selected to tackle the large feature space and build the classifier. In fact, SNoW has already been used in many other fields, such as text categorization, word sense disambiguation and even facial feature detection.", "cite_spans": [ { "start": 96, "end": 99, "text": "[7]", "ref_id": "BIBREF7" }, { "start": 107, "end": 110, "text": "[3,", "ref_id": "BIBREF3" }, { "start": 111, "end": 114, "text": "10]", "ref_id": "BIBREF10" }, { "start": 133, "end": 137, "text": "[15]", "ref_id": "BIBREF15" }, { "start": 165, "end": 168, "text": "[8,", "ref_id": "BIBREF8" }, { "start": 169, "end": 171, "text": "9]", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "2" }, { "text": "In traditional question answering systems, only two question types are time-related, i.e. \"date\" and \"period\". For the reasons explained in Sect. 1, we propose a more detailed temporal question classification scheme, namely a time granularity hierarchy scaling down from \"millions of years\" to \"second\", in order to facilitate answer extraction and verification. The initial time granularity hierarchy includes the following twelve classes: \"second\", \"minute\", \"hour\", \"day\", \"week\", \"month\", \"season\", \"year\", \"decade\", \"century\", \"thousands of years\" and \"millions of years\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity Hierarchy and Tagging Principles", "sec_num": "3" }, { "text": "The granularity \"weekday\" is added to the initial hierarchy because some temporal questions favour \"weekday\" instead of \"day\", although both of them indicate one day. Some questions favour a range of time granularities. Consider the following examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity Hierarchy and Tagging Principles", "sec_num": "3" }, { "text": "[Ef]. What time of year has the most air travel? [Eg]. What time of day did Emperor Hirohito die?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity Hierarchy and Tagging Principles", "sec_num": "3" }, { "text": "For question [Ef], the time granularity could be \"season\", \"month\" or even \"day\"; for question [Eg], the time granularity could be \"hour\" or \"minute\". We can only determine that their time granularities are less than \"year\" or \"day\" respectively, but cannot go any further. Such situations only occur with the time granularities \"year\" and \"day\", so we expand the original classification hierarchy by adding another two types: \"less than day\" and \"less than year\". Besides, the questions asking for festivals are classified into \"special date\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity Hierarchy and Tagging Principles", "sec_num": "3" },
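For concreteness, the sixteen classes and their rank order (used later in Sect. 7.2) can be written down as a small lookup table. The following Python sketch is our own illustration, not part of the original system: ranks 1-9 and 14-16 are stated in Sect. 7.2, while ranks 10-13 are inferred from the "millions of years" to "second" ordering.

```python
# The sixteen time granularity classes, ordered by rank (cf. Sect. 7.2).
# Ranks 1-9 and 14-16 are stated in the paper; 10-13 are our inference.
TIME_GRANULARITIES = [
    "second", "minute", "hour", "day", "weekday", "week", "month",
    "season", "year", "decade", "century", "thousands of years",
    "millions of years", "special date", "less than day", "less than year",
]

RANK = {tg: i + 1 for i, tg in enumerate(TIME_GRANULARITIES)}

assert RANK["second"] == 1 and RANK["year"] == 9  # as stated in Sect. 7.2
```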
{ "text": "Up to now, the time granularity hierarchy has sixteen classes. The less frequent temporal measures, such as \"microsecond\" and \"billions of years\", are ignored. As mentioned above, the class \"less than day\" overlaps several granularities, e.g. \"hour\" and \"minute\", so the time granularity hierarchy we propose is non-exclusive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity Hierarchy and Tagging Principles", "sec_num": "3" }, { "text": "In reality, some temporal questions can be answered at several different time granularities. For example, the answers to the question \"When was Abraham Lincoln born?\" can be a \"day\" (\"12/02/1809\") or a \"year\" (\"1809\"). To resolve this conflict, we adopt two principles for time granularity annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity Hierarchy and Tagging Principles", "sec_num": "3" }, { "text": "[Pa]. Assign the minimum time granularity we can determine to a given temporal question if several time granularities are applicable. [Pb]. Select the time granularity with regard to speaking habits or user preferences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity Hierarchy and Tagging Principles", "sec_num": "3" }, { "text": "When the two principles conflict with each other, principle [Pb] takes priority. With principle [Pa], the time granularity of the above question can only be \"day\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity Hierarchy and Tagging Principles", "sec_num": "3" }, { "text": "In general, temporal questions have two different focuses: entity-based and event-based.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User Preference", "sec_num": "4" }, { "text": "[a]. Entity-based question: temporal interrogative words + (be) + entity, e.g. \"When was the World War II?\" [b]. Event-based question: temporal interrogative words + event, e.g. \"When did Mount St. Helen last have a significant eruption?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User Preference", "sec_num": "4" }, { "text": "The time granularities of entities (or events) are of great significance to those of entity-based (or event-based) temporal questions. In the following, we therefore estimate the time granularities of entities and events from statistics, based on the intuition that some entities or events may favour certain types of time granularities, which is called user preference here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User Preference", "sec_num": "4" }, { "text": "The time granularity of an entity is derived by counting the co-occurrences of the entity and the time granularities. The statistics are gathered from AltaVista Web Search.
The sentences containing both the entity and time expressions are extracted from the first one hundred results returned by AltaVista with the entity as the search keyword. The probability P of a time granularity class tg_i given the occurrence of the entity is calculated as in Equation (1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity of Entities", "sec_num": "4.1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(tg_i|entity) = \\\\frac{\\\\#(tg_i \\\\cap entity)}{\\\\#(entity)}, \\\\qquad TG(entity) = \\\\arg\\\\max_{tg_i} P(tg_i|entity)", "eq_num": "(1)" } ], "section": "Time Granularity of Entities", "sec_num": "4.1.1" }, { "text": "#( ) is the number of sentences containing the expressions between the parentheses. TG(entity) represents the time granularity of the entity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity of Entities", "sec_num": "4.1.1" }, { "text": "The time granularities of the events are not directly extracted in the same way as those of the entities, because they would have little chance to be reused: there are rarely two identical events in a question corpus. As an alternative, the time granularity of an event is estimated from a sequence entity-verb-entity' approximating the event. The time granularity of the verb is determined as in Equation (1) by substituting \"verb\" for \"entity\". We choose two strategies for the estimation: maximum product and one-win-all.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity of Events", "sec_num": "4.1.2" }, { "text": "Maximum product:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity of Events", "sec_num": "4.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(tg_i|event) = \\\\frac{1}{Z} P(tg_i|entity) P(tg_i|verb) P(tg_i|entity'), \\\\qquad TG(event) = \\\\arg\\\\max_{tg_i} P(tg_i|event)", "eq_num": "(2)" } ], "section": "Time Granularity of Events", "sec_num": "4.1.2" }, { "text": "TG(event) represents the time granularity of the event; Z is used for normalization. One-win-all:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity of Events", "sec_num": "4.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "TG(event) = \\\\arg\\\\max_{tg_i} \\\\max\\\\{P(tg_i|entity), P(tg_i|verb), P(tg_i|entity')\\\\}", "eq_num": "(3)" } ], "section": "Time Granularity of Events", "sec_num": "4.1.2" }, { "text": "Equation (1) is smoothed in order to avoid zero values in Equation (2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity of Events", "sec_num": "4.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(tg_i|w) = \\\\frac{\\\\#(tg_i \\\\cap w) + 1}{\\\\#(w) + t}", "eq_num": "(4)" } ], "section": "Time Granularity of Events", "sec_num": "4.1.2" }, { "text": "t is the number of the time granularity classes; w is either an entity or a verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Time Granularity of Events", "sec_num": "4.1.2" }, { "text": "Among the 197 ambiguous questions, 12 are entity-based and the remaining 185 are event-based. If all the 197 questions are arbitrarily assigned the tag \"year\", the tagging accuracy is 48.2%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment: Evaluating the Estimation", "sec_num": "4.1.3" }, { "text": "For each entity-based or event-based question, the time granularity of the entity or event within it is taken as the time granularity of the question. Compared with the time granularities of the reference answers, we achieve 75% accuracy for the entity-based questions; for the event-based questions, the accuracies of the maximum product and one-win-all strategies are 67.0% and 64.3% respectively. The maximum product strategy thus appears more effective than the one-win-all strategy in this application. With the maximum product strategy, the overall accuracy on all the 197 ambiguous questions is 67.4%. Notice that the accuracy of arbitrary tagging is only 48.2%, so the estimation of the time granularities of the entities and the events is useful for determining the time granularities of temporal questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment: Evaluating the Estimation", "sec_num": "4.1.3" },
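As an illustration of the estimation above, here is a minimal Python sketch of Equations (1)-(4). It is a sketch under stated assumptions: the co-occurrence counts are hypothetical stand-ins for the statistics collected from the AltaVista result sentences, and TIME_GRANULARITIES reuses the list from the earlier sketch.

```python
from collections import Counter

T = 16  # number of time granularity classes (t in Equation 4)

def granularity_dist(cooc, total):
    """Smoothed P(tg | w) of Equations (1) and (4): add-one smoothing over
    the T classes. cooc[tg] is the number of retrieved sentences containing
    both w (an entity or a verb) and a time expression of granularity tg;
    total is the number of retrieved sentences containing w."""
    return {tg: (cooc.get(tg, 0) + 1) / (total + T) for tg in TIME_GRANULARITIES}

def tg_event_max_product(p_ent, p_verb, p_ent2):
    """Maximum product strategy (Equation 2). The normaliser Z does not
    change the argmax, so it is omitted here."""
    return max(p_ent, key=lambda tg: p_ent[tg] * p_verb[tg] * p_ent2[tg])

def tg_event_one_win_all(p_ent, p_verb, p_ent2):
    """One-win-all strategy (Equation 3): the single strongest vote wins."""
    return max(p_ent, key=lambda tg: max(p_ent[tg], p_verb[tg], p_ent2[tg]))

# Hypothetical counts for one constituent of an entity-verb-entity' sequence:
p_verb = granularity_dist(Counter({"day": 40, "year": 35}), 100)
```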
{ "text": "In the experiments on estimation, we find that some entities or events tend to favour only one certain time granularity, some tend to favour several time granularities, and the rest may have an almost uniform distribution over all the time granularities. In Fig. 1(a), the time granularity \"day\" takes a preponderant proportion, i.e. more than 80%, in the distribution of \"gestation\", which is called a single-peak distribution. In Fig. 1(b), both \"day\" and \"year\" take a large proportion, so \"Lincoln born\" is multi-peak-distributed. In Fig. 1(c), for \"take place\", all the time granularities take a similar proportion and the distribution is uniform.", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 265, "text": "Fig. 1(a)", "ref_id": "FIGREF1" }, { "start": 425, "end": 434, "text": "Fig. 1(b)", "ref_id": "FIGREF1" }, { "start": 531, "end": 540, "text": "Fig. 1(c)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Observation of Distribution", "sec_num": "4.2.1" }, { "text": "Assume an entity (or event) E, its possible time granularities {tg_i, i=1,\u2026,t} and the corresponding probabilities {P_i, i=1,\u2026,t} (calculated by Equations (1) and (2)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on Distribution", "sec_num": "4.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\\mu = \\\\frac{1}{t} \\\\sum_i P_i; \\\\qquad d = \\\\sum_i I(P_i, \\\\mu); \\\\qquad I(P_i, \\\\mu) = \\\\begin{cases} 1 & P_i > \\\\mu \\\\\\\\ 0 & P_i \\\\le \\\\mu \\\\end{cases}", "eq_num": "(5)" } ], "section": "Experiments on Distribution", "sec_num": "4.2.2" }, { "text": "d is the number of time granularities tg_i whose probability P_i is higher than the average probability \u00b5. For simplicity, the distribution D_E of the time granularity of E is determined by thresholding d. Observing the experiment results in Sect. 4.1.3, accuracies of 88.7%, 56.3% and 18.9% are achieved on the questions within which the time granularities of the entities or events are estimated to be single-peak-, multi-peak- and uniform-distributed respectively. So whether the estimated time granularity of the entity or event is single-peak-, multi-peak- or uniform-distributed indicates the confidence of the estimation, and it can be taken as a feature associated with the estimation of the time granularities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on Distribution", "sec_num": "4.2.2" }, { "text": "As described in the above section, the estimation of the time granularities of the entities and the events is useful for determining the time granularities of temporal questions; whether a question is entity-based or not and the distribution of the time granularities of the entities and events within the question are also taken as associated features. These three features are collectively named the user preference feature. Besides, another four types of features are considered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Design", "sec_num": "5" },
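As a concrete reading of the distribution part of the user preference feature (Sect. 4.2.2), the sketch below labels an estimated distribution via Equation (5). The cut-offs that map d to the three labels are our assumption; the paper's exact mapping is not reproduced here.

```python
def distribution_shape(probs):
    """Classify a {granularity: probability} distribution as single-peak,
    multi-peak or uniform. d counts the classes above the mean (Equation 5)."""
    t = len(probs)
    mu = sum(probs.values()) / t                   # average probability
    d = sum(1 for p in probs.values() if p > mu)   # Equation (5)
    if d == 1:
        return "single-peak"
    if 1 < d <= 3:   # assumed cut-off; the paper's threshold is not given here
        return "multi-peak"
    return "uniform"  # exactly uniform (d == 0) or many classes above the mean
```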
{ "text": "Word N-gram features, e.g. unigrams and bigrams, are the most straightforward features and are commonly used in question classification. In general question classification, the unigram \"when\" indicates a temporal question. In temporal question classification, the unigram \"birthday\" always implies a \"day\", while the bigram \"when \u2026 born\" is strong evidence of the time granularity \"day\". From this aspect, word N-grams also reflect user preference on time granularity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word N-grams", "sec_num": null }, { "text": "Time expressions are common in temporal questions, e.g. \"July 11, 1998\" and the date modifier \"1998\" in \"1998 Superbowl\". We take the granularities of time expressions as features, for example, TG(\"in 1998\") = \"year\" and TG(\"July 11, 1998\") = \"day\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Granularity of Time Expressions", "sec_num": null }, { "text": "The granularities of time expressions impose constraints on the time granularities of temporal questions. If a temporal question contains a time expression whose time granularity is tg, the time granularity of this question cannot be tg. For example, the time granularity of the question \"When is the 1998 SuperBowl?\" cannot be \"year\", i.e. the time granularity of \"1998\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Granularity of Time Expressions", "sec_num": null }, { "text": "In word N-gram features, words are processed equally; however, some special words, combined with verbs or temporal connectives (e.g. \"when\", \"before\" and \"since\"), produce special patterns and affect the time granularities of temporal questions. Consider the following examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special Patterns", "sec_num": null }, { "text": "[Eh]. Since when hasn't John Sununu been able to fly on government planes for personal business? [Ei]. What time of the day does Michael Milken typically wake up? For question [Eh], the temporal preposition \"since\" combined with \"when\" highlights that this question seeks a beginning time point, which implies a finer time granularity; for question [Ei], \"typically\" combined with the verb \"wake up\" indicates a habitually occurring event and implies that the time granularity could be \"less than day\" or \"less than year\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Special Patterns", "sec_num": null }, { "text": "In general, there are four event types: states, activities, accomplishments and achievements. States and activities favour larger time granularities, while accomplishments and achievements favour smaller ones. For example, the activity \"stay\" favours a larger time granularity than the accomplishment event \"take place\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Event Types", "sec_num": null }, { "text": "In this work, we choose the Sparse Network of Winnow (SNoW) model as the time granularity classifier and compare it with a commonly used Language Model (LM) classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier Building", "sec_num": "6" }, { "text": "As the language model has already been used in question classification [7], it is taken as the baseline model in the experiments. The language model mainly combines two types of features, i.e. unigrams and bigrams. Given a temporal question Q, its time granularity TG(Q) is calculated by Equation (7).", "cite_spans": [ { "start": 67, "end": 70, "text": "[7]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Language Model (LM)", "sec_num": "6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "TG(Q) = \\\\arg\\\\max_{tg_i} \\\\left[ \\\\lambda \\\\prod_{j=1}^{m} P(tg_i|w_j) + (1-\\\\lambda) \\\\prod_{j=1}^{n} P(tg_i|w_j w_{j+1}) \\\\right]", "eq_num": "(7)" } ], "section": "Language Model (LM)", "sec_num": "6.1" }, { "text": "w represents words; m and n are the numbers of unigrams and bigrams in the question respectively. \u03bb assigns different weights to unigrams and bigrams. In the experiment, the best accuracy is achieved when \u03bb = 0.7 (see Sect. 7.3.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Model (LM)", "sec_num": "6.1" },
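For illustration, Equation (7) can be read directly as code. In this sketch the probability tables p_uni and p_bi are hypothetical placeholders for the statistics estimated from the training questions, and the small floor for unseen N-grams is our choice, not the paper's.

```python
def lm_classify(tokens, p_uni, p_bi, lam=0.7):
    """Equation (7): per class, interpolate the product of unigram
    probabilities with the product of bigram probabilities.
    p_uni[tg][w] ~ P(tg|w); p_bi[tg][(w1, w2)] ~ P(tg|w1 w2)."""
    def score(tg):
        uni = bi = 1.0
        for w in tokens:                      # product over the m unigrams
            uni *= p_uni[tg].get(w, 1e-6)     # floor for unseen words
        for pair in zip(tokens, tokens[1:]):  # product over the n bigrams
            bi *= p_bi[tg].get(pair, 1e-6)
        return lam * uni + (1 - lam) * bi
    return max(p_uni, key=score)              # argmax over the classes
```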
{ "text": "SNoW is a learning framework applicable to tasks with a very large number of features. It selects active features by updating feature weights, and learns a linear function from a corpus of positive and negative examples; w_{c,i}, the weight of feature i connected with class c, is learned from the training corpus. SNoW has already been used in question classification [3, 10] with good results reported. As mentioned in Sect. 5, five types of features are selected for our task; altogether they amount to more than ten thousand features. Since this is a large feature set, SNoW is a good choice.", "cite_spans": [ { "start": 243, "end": 246, "text": "[3,", "ref_id": "BIBREF3" }, { "start": 247, "end": 250, "text": "10]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Sparse Network of Winnow (SNoW)", "sec_num": "6.2" },
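The actual classifier is the downloaded UIUC SNoW package (see Sect. 7.1); the snippet below only sketches the linear decision rule such a network learns, a sparse per-class weight vector over active features, and is not the package's API.

```python
def snow_predict(active, weights, thetas):
    """Schematic SNoW-style prediction. For each class c, sum the learned
    weights weights[c][i] over the active feature labels i, compare the
    activation with the class threshold thetas[c], and return the class
    with the largest margin."""
    def margin(c):
        return sum(weights[c].get(i, 0.0) for i in active) - thetas.get(c, 0.0)
    return max(weights, key=margin)
```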
{ "text": "In the 348-question corpus (see Sect. 1), the time granularities of 151 questions are straightforward, while those of the remaining 197 questions are ambiguous. Of the sixteen time granularity classes, we only consider the ten classes containing more than four questions each. With the questions of the unconsidered time granularity classes excluded, the question corpus has 339 questions in total, 145 for training and 194 for testing. As a result, the task is to learn a model from the 145-question training corpus and classify the questions in the 194-question test corpus into ten classes: \"second\", \"minute\", \"hour\", \"day\", \"weekday\", \"week\", \"month\", \"season\", \"year\" and \"century\". The SNoW classifier is downloaded from UIUC (http://l2r.cs.uiuc.edu/~cogcomp/download.php?key=SNOW).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "7.1" }, { "text": "The primary evaluation standard is accuracy_1, i.e. the proportion of correctly classified questions out of the test questions (see Equation 9). However, if a question seeking a finer time granularity, e.g. \"day\", has been incorrectly determined as a coarser one, e.g. \"year\", it should also be taken as partly correct, which is reflected in accuracy_2 (see Equation 10). R(tg_Q) is the rank of the time granularity class tg_Q, scaling down from \"millions of years\" to \"second\": the rank of \"second\" is 1, while the rank of \"year\" is 9; the ranks of the last three time granularities, i.e. \"special date\", \"less than day\" and \"less than year\", are 14, 15 and 16 respectively. Likewise, R(tg'_Q) is the rank of the predicted class tg'_Q.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Criteria", "sec_num": "7.2" }, { "text": "In the experiments, the language model is taken as the baseline. The performance of the SNoW-based classifier is compared with that of the language model, and different combinations of features are tested in the SNoW-based classifier to investigate their contributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results and Analysis", "sec_num": "7.3" }, { "text": "The LM classifier takes two types of features: unigrams and bigrams. The experiment results are presented in Fig. 2 .", "cite_spans": [], "ref_spans": [ { "start": 103, "end": 109, "text": "Fig. 2", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "LM Classifier", "sec_num": "7.3.1" }, { "text": "Accuracy varies with the feature weight \u03bb, and the best accuracy (accuracy_1 68.0% and accuracy_2 68.9%) is achieved when \u03bb = 0.7. The accuracy when \u03bb = 1.0 is higher than that when \u03bb = 0, indicating that, in the framework of the language model, unigrams achieve better performance than bigrams; this is attributed to the sparseness of the bigram features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LM Classifier", "sec_num": "7.3.1" }, { "text": "Our SNoW classifier requires binary features, so we encode each feature with an integer label. When a feature is observed in a question, its label appears in the extracted feature set of this question. There are six types of features: 15 user preferences (10 for the estimation of time granularities, 3 for the estimated distributions, and 2 for question focuses) (F_1), 951 unigrams (F_2), 9277 bigrams (F_3), 10 granularities of time expressions (F_4), 14 special patterns (F_5), and 4 event types (F_6). Although the total number of features is more than ten thousand, the features in one question are in general no more than twenty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SNOW Classifier", "sec_num": "7.3.2" },
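To illustrate the binary encoding described above: interning feature strings as integer labels is enough for a sketch. The feature strings such as "uni=when" are our own illustrative naming, and the real system uses the UIUC SNoW package's own input format.

```python
class FeatureEncoder:
    """Intern feature strings as integer labels; a question is then
    represented by the sparse set of labels of its observed features."""

    def __init__(self):
        self.index = {}

    def encode(self, features):
        ids = []
        for f in features:
            if f not in self.index:
                self.index[f] = len(self.index) + 1   # fresh integer label
            ids.append(self.index[f])
        return sorted(set(ids))

encoder = FeatureEncoder()
print(encoder.encode(["uni=when", "uni=born", "bi=when_born", "pref=day"]))
# -> [1, 2, 3, 4]
```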
{ "text": "Accuracies of the SNoW classifier on the 194 test questions are presented in Table 1 . It shows that, simply using unigram features, the SNoW classifier already achieves better accuracy than the LM classifier (accuracy_1: 69.5% vs. 68.0%; accuracy_2: 70.3% vs. 68.9%). From this view, the SNoW classifier outperforms the LM classifier in handling sparse features. When all six types of features are used, the SNoW classifier achieves 83.5% in accuracy_1 and 83.9% in accuracy_2, close to the accuracy of user tagging, i.e. 86.2%. With all six types of features, accuracy_1 on the questions with different types of time granularity is illustrated in Table 2 . It reveals that the classification errors mainly come from the time granularities \"month\", \"day\" and \"century\". The low accuracy on \"month\" and \"century\" results from the absence of enough examples, i.e. fewer than five examples each for training and testing. Many \"day\" questions are incorrectly classified into \"year\", which accounts for the low accuracy on \"day\"; the reason lies in that there are more \"year\" questions than \"day\" questions in the training question corpus (116 vs. 56).", "cite_spans": [], "ref_spans": [ { "start": 715, "end": 722, "text": "Table 1", "ref_id": null }, { "start": 1288, "end": 1295, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "SNOW Classifier", "sec_num": "7.3.2" }, { "text": "In general, we can extract three F_1 features, one F_4 feature, at most two F_5 features, and one F_6 feature from one question. It is hard for the SNoW classifier to train and test independently on each of these types of features because of the small number of such features in one example question. However, the numbers of F_2 and F_3 features in a question are normally more than ten, so we take unigrams (F_2) and bigrams (F_3) as the basic feature set. Table 3 presents the accuracy when each of the remaining four types of features is added to the basic feature set. As expected, user preference makes the most significant improvement, 7.82% in accuracy_1 and 7.90% in accuracy_2. Special patterns also play an important role, making a 2.6% improvement in accuracy_1. It is surprising that event type makes only a modest improvement (0.5%); analyzing the experimental results, we find that, as there are only four event types, it makes a limited contribution to the 10-class time granularity classification.", "cite_spans": [], "ref_spans": [ { "start": 453, "end": 460, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "SNOW Classifier", "sec_num": "7.3.2" }, { "text": "Various features for the time granularity classification of temporal questions are investigated in this paper. User preference is shown to make a significant contribution to classification performance. The SNoW classifier, combining user preference, word N-grams, granularity of time expressions, special patterns and event types, achieves 83.5% classification accuracy, close to the manual tagging accuracy of 86.2%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "The granularity hierarchy and the tagging principles will be detailed later.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This project is partially supported by Hong Kong RGC CERG (Grant No: PolyU5181/03E), and partially by CUHK Direct Grant (No: 2050330).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "The TREC-8 Question Answering Track Evaluation. Text Retrieval Conference TREC-8", "authors": [], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "TREC (ed.): The TREC-8 Question Answering Track Evaluation. Text Retrieval Conference TREC-8, Gaithersburg, MD (1999)", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Using TimeML in Question Answering", "authors": [ { "first": "D", "middle": [], "last": "Radev", "suffix": "" }, { "first": "B", "middle": [], "last": "Sundheim", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radev D.
and Sundheim B.: Using TimeML in Question Answering. http://www.cs.brandeis.edu/~jamesp/arda/time/documentation/TimeML-use-in-qa-v1.0.pdf (2002)", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning Question Classifiers", "authors": [ { "first": "X", "middle": [], "last": "Li", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 19th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "556--562", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, X. and Roth, D.: Learning Question Classifiers. Proceedings of the 19th International Conference on Computational Linguistics (2002) 556-562", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Answer Extraction", "authors": [ { "first": "S", "middle": [], "last": "Abney", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "A", "middle": [], "last": "Singhal", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 6th ANLP Conference", "volume": "", "issue": "", "pages": "296--301", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Abney, M. Collins, and A. Singhal: Answer Extraction. Proceedings of the 6th ANLP Conference (2000) 296-301", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Splitting Complex Temporal Questions for Question Answering Systems", "authors": [ { "first": "E", "middle": [], "last": "Saquete", "suffix": "" }, { "first": "P", "middle": [], "last": "Mart\u00ednez-Barco", "suffix": "" }, { "first": "R", "middle": [], "last": "Mu\u00f1oz", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "567--574", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saquete E., Mart\u00ednez-Barco P., Mu\u00f1oz R.: Splitting Complex Temporal Questions for Question Answering Systems. Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (2004) 567-574", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Temporal Profiles of Queries", "authors": [ { "first": "F", "middle": [], "last": "Diaz", "suffix": "" }, { "first": "R", "middle": [], "last": "Jones", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diaz, F. and Jones, R.: Temporal Profiles of Queries. Yahoo! Research Labs Technical Report YRL-2004-022 (2004)", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Question Classification Using Language Modeling", "authors": [ { "first": "Wei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2002, "venue": "CIIR Technical Report", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei Li: Question Classification Using Language Modeling. CIIR Technical Report (2002)", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Question Classification Using Support Vector Machines", "authors": [ { "first": "Dell", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wee", "middle": [], "last": "Sun Lee", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "26--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dell Zhang and Wee Sun Lee: Question Classification Using Support Vector Machines.
Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (2003) 26-32", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Question Classification Using HDAG Kernel", "authors": [ { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Hirotoshi", "middle": [], "last": "Taira", "suffix": "" }, { "first": "Yutaka", "middle": [], "last": "Sasaki", "suffix": "" }, { "first": "Eisaku", "middle": [], "last": "Maeda", "suffix": "" } ], "year": 2003, "venue": "Proceedings of Workshop on Multilingual Summarization and Question Answering", "volume": "", "issue": "", "pages": "61--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Suzuki, Hirotoshi Taira, Yutaka Sasaki, and Eisaku Maeda: Question Classification Using HDAG Kernel. Proceedings of Workshop on Multilingual Summarization and Question Answering (2003) 61-68", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Role of Semantic Information in Learning Question Classifiers", "authors": [ { "first": "X", "middle": [], "last": "Li", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" }, { "first": "K", "middle": [], "last": "Small", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li X., Roth D., and Small K.: The Role of Semantic Information in Learning Question Classifiers. Proceedings of the International Joint Conference on Natural Language Processing (2004)", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Temporal Information Extraction for Temporal Question Answering", "authors": [ { "first": "Frank", "middle": [], "last": "Schilder", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Habel", "suffix": "" } ], "year": 2003, "venue": "New Directions in Question Answering. Papers from the 2003 AAAI Spring Symposium", "volume": "", "issue": "", "pages": "34--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schilder, Frank & Habel, Christopher: Temporal Information Extraction for Temporal Question Answering. In New Directions in Question Answering. Papers from the 2003 AAAI Spring Symposium TR SS-03-07 (2003) 34-44", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A Question Answering System Supported by Information Extraction", "authors": [ { "first": "Rohini", "middle": [ "K" ], "last": "Srihari", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" } ], "year": 2000, "venue": "Proceedings of Association for Computational Linguistics", "volume": "", "issue": "", "pages": "166--172", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rohini K. Srihari, Wei Li: A Question Answering System Supported by Information Extraction.
Proceedings of Association for Computational Linguistics (2000) 166-172", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Towards Semantics-Based Answer Pinpointing", "authors": [ { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Laurie", "middle": [], "last": "Geber", "suffix": "" }, { "first": "Ulf", "middle": [], "last": "Hermjakob", "suffix": "" }, { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Deepak", "middle": [], "last": "Ravichandran", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the DARPA Human Language Technology Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduard Hovy, Laurie Geber, Ulf Hermjakob, Chin-Yew Lin, and Deepak Ravichandran: Towards Semantics-Based Answer Pinpointing. Proceedings of the DARPA Human Language Technology Conference (2001)", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Parsing and Question Classification for Question Answering", "authors": [ { "first": "U", "middle": [], "last": "Hermjacob", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the Association for Computational Linguists Workshop on Open-Domain Question Answering", "volume": "", "issue": "", "pages": "17--22", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hermjacob U.: Parsing and Question Classification for Question Answering. Proceedings of the Association for Computational Linguists Workshop on Open-Domain Question Answering (2001) 17-22", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Question Answering Using Maximum Entropy Components", "authors": [ { "first": "Franz", "middle": [ "M" ], "last": "Ittycheriah", "suffix": "" }, { "first": "W", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "A", "middle": [], "last": "Ratnaparki", "suffix": "" }, { "first": "R", "middle": [], "last": "Mammone", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the North American chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "33--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ittycheriah, Franz M., Zhu W., Ratnaparki A. and Mammone R.: Question Answering Using Maximum Entropy Components. Proceedings of the North American chapter of the Association for Computational Linguistics (2001) 33-39", "links": null } }, "ref_entries": { "FIGREF1": { "text": "Distribution of the time granularities of the entities and events", "num": null, "type_str": "figure", "uris": null }, "FIGREF3": { "text": "It selects active features by updating the weights of features, and learns a linear function from a corpus consisting of positive and negative examples. Let A_c = {i_1, \u2026, i_m} be the set of features that are active and linked to target class c, and let s_i be the real-valued strength associated with feature i in the example. Then the example's class is c if and only if \\\\sum_{i \\\\in A_c} w_{c,i} s_i \\\\geq \\\\theta_c, where \\\\theta_c is the threshold of class c.", "num": null, "type_str": "figure", "uris": null }, "FIGREF5": { "text": "Accuracy of the LM classifier. The data point in the circle is the best performance achieved.", "num": null, "type_str": "figure", "uris": null }, "TABREF1": { "text": "TG(event) represents the time granularity of the event. Z is used for normalization.", "num": null, "html": null, "content": "
Maximum product: P(tg_i|event) = (1/Z) P(tg_i|entity) P(tg_i|verb) P(tg_i|entity'); TG(event) = argmax_{tg_i} P(tg_i|event)   (2)
One-win-all: TG(event) = argmax_{tg_i} max{ P(tg_i|entity), P(tg_i|verb), P(tg_i|entity') }   (3)
", "type_str": "table" }, "TABREF3": { "text": "Accuracy (%) of SNoW classifier Accuracy 1 (%) on different types of time granularities Accuracy (%) on combination of different types of features", "num": null, "html": null, "content": "
Feature SetF 2F 2, 3F 1~6
Accuracy 169.572.183.5
Accuracy 270.372.783.9
TGsecondminutehourdayweekday
Accuracy 110010010064.2100
TGweekmonthseasonyearcentury
Accuracy 11006010090.566.7
Feature SetF 2,3F 1,2,3F 2,3,4F 2,3,5F 2,3,6
Accuracy 172.179.873.774.772.6
Accuracy 272.780.674.775.273.1
", "type_str": "table" } } } }