{ "paper_id": "N06-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:45:33.348280Z" }, "title": "Effectively Using Syntax for Recognizing False Entailment", "authors": [ { "first": "Rion", "middle": [], "last": "Snow", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University Stanford", "location": { "postCode": "94305", "region": "CA" } }, "email": "" }, { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Research", "location": { "addrLine": "One Microsoft Way Redmond", "postCode": "98027", "region": "WA" } }, "email": "lucyv@microsoft.com" }, { "first": "Arul", "middle": [], "last": "Menezes", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Research", "location": { "addrLine": "One Microsoft Way Redmond", "postCode": "98027", "region": "WA" } }, "email": "arulm@microsoft.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recognizing textual entailment is a challenging problem and a fundamental component of many applications in natural language processing. We present a novel framework for recognizing textual entailment that focuses on the use of syntactic heuristics to recognize false entailment. We give a thorough analysis of our system, which demonstrates state-of-the-art performance on a widely-used test set.", "pdf_parse": { "paper_id": "N06-1005", "_pdf_hash": "", "abstract": [ { "text": "Recognizing textual entailment is a challenging problem and a fundamental component of many applications in natural language processing. We present a novel framework for recognizing textual entailment that focuses on the use of syntactic heuristics to recognize false entailment. We give a thorough analysis of our system, which demonstrates state-of-the-art performance on a widely-used test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recognizing the semantic equivalence of two fragments of text is a fundamental component of many applications in natural language processing. Recognizing textual entailment, as formulated in the recent PASCAL Challenge 1 , is the problem of determining whether some text sentence T entails some hypothesis sentence H.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The motivation for this formulation was to isolate and evaluate the application-independent component of semantic inference shared across many application areas, reflected in the division of the PAS-CAL RTE dataset into seven distinct tasks: Information Extraction (IE), Comparable Documents (CD), Reading Comprehension (RC), Machine Translation (MT), Information Retrieval (IR), Question Answering (QA), and Paraphrase Acquisition (PP).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The RTE problem as presented in the PASCAL RTE dataset is particularly attractive in that it is a reasonably simple task for human annotators with high inter-annotator agreement (95.1% in one independent labeling (Bos and Markert, 2005) ), but an extremely challenging task for automated systems. The highest accuracy systems on the RTE test set are still much closer in performance to a random baseline accuracy of 50% than to the inter-annotator agreement. 
For example, two high-accuracy systems are those described in (Tatu and Moldovan, 2005) , achieving 60.4% accuracy with no task-specific information, and (Bos and Markert, 2005) , which achieves 61.2% task-dependent accuracy, i.e. when able to use the specific task labels as input.", "cite_spans": [ { "start": 213, "end": 236, "text": "(Bos and Markert, 2005)", "ref_id": "BIBREF2" }, { "start": 521, "end": 546, "text": "(Tatu and Moldovan, 2005)", "ref_id": "BIBREF13" }, { "start": 613, "end": 636, "text": "(Bos and Markert, 2005)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous systems for RTE have attempted a wide variety of strategies. Many previous approaches have used a logical form representation of the text and hypothesis sentences, focusing on deriving a proof by which one can infer the hypothesis logical form from the text logical form (Bayer et al., 2005; Bos and Markert, 2005; Raina et al., 2005; Tatu and Moldovan, 2005) . These papers often cite that a major obstacle to accurate theorem proving for the task of textual entailment is the lack of world knowledge, which is frequently difficult and costly to obtain and encode. Attempts have been made to remedy this deficit through various techniques, including modelbuilding (Bos and Markert, 2005) and the addition of semantic axioms (Tatu and Moldovan, 2005) .", "cite_spans": [ { "start": 280, "end": 300, "text": "(Bayer et al., 2005;", "ref_id": "BIBREF1" }, { "start": 301, "end": 323, "text": "Bos and Markert, 2005;", "ref_id": "BIBREF2" }, { "start": 324, "end": 343, "text": "Raina et al., 2005;", "ref_id": "BIBREF10" }, { "start": 344, "end": 368, "text": "Tatu and Moldovan, 2005)", "ref_id": "BIBREF13" }, { "start": 674, "end": 697, "text": "(Bos and Markert, 2005)", "ref_id": "BIBREF2" }, { "start": 734, "end": 759, "text": "(Tatu and Moldovan, 2005)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our system diverges from previous approaches most strongly by focusing upon false entailments; rather than assuming that a given entailment is false until proven true, we make the opposite assump-tion, and instead focus on applying knowledge-free heuristics that can act locally on a subgraph of syntactic dependencies to determine with high confidence that the entailment is false. Our approach is inspired by an analysis of the RTE dataset that suggested a syntax-based approach should be approximately twice as effective at predicting false entailment as true entailment (Vanderwende and Dolan, 2006) . The analysis implied that a great deal of syntactic information remained unexploited by existing systems, but gave few explicit suggestions on how syntactic information should be applied; this paper provides a starting point for creating the heuristics capable of obtaining the bound they suggest 2 .", "cite_spans": [ { "start": 574, "end": 603, "text": "(Vanderwende and Dolan, 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Similar to most other syntax-based approaches to recognizing textual entailment, we begin by representing each text and hypothesis sentence pair in logical forms. These logical forms are generated using NLPWIN 3 , a robust system for natural language parsing and generation (Heidorn, 2000) . 
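As a rough illustration (the relation labels and feature bits below follow this paper's notation but are schematic, not actual NLPWIN output), a parse such as the one in Figure 1 can be stored as a set of typed triples over feature-bearing nodes; a minimal Python sketch with hypothetical node attributes:

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    lemma: str
    pos: str                        # part of speech assigned by the parser
    feats: frozenset = frozenset()  # semantic feature bits, e.g. Pass, Plur, Loc

# Schematic triples for 'Six hostages in Iraq were freed.' (cf. Figure 1);
# the relation names (OBJ, LOCN, MOD) follow the paper's notation, and the
# analysis shown is illustrative only.
free = Node('free', 'Verb', frozenset({'Pass', 'Past'}))
hostage = Node('hostage', 'Noun', frozenset({'Plur'}))
iraq = Node('Iraq', 'Noun', frozenset({'PrprN', 'Loc'}))
six = Node('six', 'Adj', frozenset({'Quant'}))

triples = {
    ('OBJ', free, hostage),   # deep object of the passive 'were freed'
    ('LOCN', hostage, iraq),  # 'in Iraq' locates the hostages
    ('MOD', hostage, six),    # 'six' modifies 'hostages'
}
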
Our logical form representation may be considered equivalently as a set of triples of the form RELATION(node i , node j ), or as a graph of syntactic dependencies; we use both terminologies interchangeably. Our algorithm proceeds as follows:", "cite_spans": [ { "start": 274, "end": 289, "text": "(Heidorn, 2000)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "1. Parse each sentence with the NLPWIN parser, resulting in syntactic dependency graphs for the text and hypothesis sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "2. Attempt an alignment of each content node in the dependency graph of the hypothesis sentence to some node in the graph of the text sentence, using a set of heuristics for alignment (described in Section 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "3. Using the alignment, apply a set of syntactic heuristics for recognizing false entailment (described in Section 4); if any match, predict that the entailment is false.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "2 (Vanderwende and Dolan, 2006) suggest that the truth or falsehood of 48% of the entailment examples in the RTE test set could be correctly identified via syntax and a thesaurus alone; thus by random guessing on the rest of the examples one might hope for an accuracy level of 0.48 + 0.52 2 = 74%. Figure 1 : Logical form produced by NLPWIN for the sentence \"Six hostages in Iraq were freed.\"", "cite_spans": [ { "start": 2, "end": 31, "text": "(Vanderwende and Dolan, 2006)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 299, "end": 307, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "4. If no syntactic heuristic matches, back off to a lexical similarity model (described in section 5.1), with an attempt to align detected paraphrases (described in section 5.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "In addition to the typical syntactic information provided by a dependency parser, the NLPWIN parser provides an extensive number of semantic features obtained from various linguistic resources, creating a rich environment for feature engineering. For example, Figure 1 (from Dev Ex. #616) illustrates the dependency graph representation we use, demonstrating the stemming, part-of-speech tagging, syntactic relationship identification, and semantic feature tagging capabilities of NLPWIN. We define a content node to be any node whose lemma is not on a small stoplist of common stop words. In addition to content vs. 
non-content nodes, among content nodes we distinguish between entities and nonentities: an entity node is any node classified by the NLPWIN parser as being a proper noun, quantity, or time.", "cite_spans": [], "ref_spans": [ { "start": 260, "end": 268, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "Each of the features of our system were developed from inspection of sentence pairs from the RTE development data set, and used in the final system only if they improved the system's accuracy on the development set (or improved F-score if accuracy was unchanged); sentence pairs in the RTE test set were left uninspected and used for testing purposes only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Description", "sec_num": "2" }, { "text": "Our syntactic heuristics for recognizing false entailment rely heavily on the correct alignment of words and multiword units between the text and hypothesis logical forms. In the notation below, we will consider h and t to be nodes in the hypothesis H and text T logical forms, respectively. To accomplish the task of node alignment we rely on the following heuristics:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linguistic cues for node alignment", "sec_num": "3" }, { "text": "As in (Herrera et al., 2005) and others, we align a node h \u2208 H to any node t \u2208 T that has both the same part of speech and belongs to the same synset in WordNet. Our alignment considers multiword units, including compound nouns (e.g., we align \"Oscar\" to \"Academy Award\" as in Figure 2 ), as well as verb-particle constructions such as \"set off\" (aligned to \"trigger\" in Test Ex. #1983).", "cite_spans": [ { "start": 6, "end": 28, "text": "(Herrera et al., 2005)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 277, "end": 285, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "WordNet synonym match", "sec_num": "3.1" }, { "text": "The NLPWIN parser assigns a normalized numeric value feature to each piece of text inferred to correspond to a numeric value; this allows us to align \"6th\" to \"sixth\" in Test Ex. #1175. and to align \"a dozen\" to \"twelve\" in Test Ex. #1231.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Numeric value match", "sec_num": "3.2" }, { "text": "Many acronyms are recognized using the synonym match described above; nonetheless, many acronyms are not yet in WordNet. For these cases we have a specialized acronym match heuristic which aligns pairs of nodes with the following properties: if the lemma for some node h consists only of capitalized letters (with possible interceding periods), and the letters correspond to the first characters of some multiword lemma for some t \u2208 T , then we consider h and t to be aligned. This heuristic allows us to align \"UNDP\" to \"United Nations Development Programme\" in Dev Ex. #357 and \"ANC\" to \"African National Congress\" in Test Ex. #1300.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acronym match", "sec_num": "3.3" }, { "text": "We would like to align words which have the same root form (or have a synonym with the same root form) and which possess similar semantic meaning, but which may belong to different syntactic categories. We perform this by using a combination of the synonym and derivationally-related form information contained within WordNet. 
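As a rough sketch of this expansion (using NLTK's WordNet interface purely for illustration; our system queries WordNet through its own machinery), one can pool the derivationally-related forms of every synonym of a word, mirroring the procedure defined just below:

from nltk.corpus import wordnet as wn  # assumes the NLTK WordNet data is installed

def deriv(word):
    # DERIV(word): derivationally-related forms of all WordNet synonyms of word,
    # an approximation of the set defined in the next paragraph.
    forms = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():  # synonyms of word, including word itself
            for related in lemma.derivationally_related_forms():
                forms.add(related.name().replace('_', ' '))
    return forms

# e.g. deriv('win') includes noun forms such as 'winner', allowing the verb
# 'won' in one sentence to be soft-aligned to the noun 'winner' in the other.
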
Explicitly our procedure for constructing the set of derivationallyrelated forms for a node h is to take the union of all derivationally-related forms of all the synonyms of h (including h itself), i.e.:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Derivational form match", "sec_num": "3.4" }, { "text": "DERIV(h) = \u222a s\u2208WN-SYN(h) WN-DERIV(s)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Derivational form match", "sec_num": "3.4" }, { "text": "In addition to the noun/verb derivationally-related forms, we detect adjective/adverb derivationallyrelated forms that differ only by the suffix 'ly'. Unlike the previous alignment heuristics, we do not expect that two nodes aligned via derivationallyrelated forms will play the same syntactic role in their respective sentences. Thus we consider two nodes aligned in this way to be soft-aligned, and we do not attempt to apply our false entailment recognition heuristics to nodes aligned in this way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Derivational form match", "sec_num": "3.4" }, { "text": "As a special case of derivational form match, we soft-align matches from an explicit list of place names, adjectival forms, and demonyms 4 ; e.g., \"Sweden\" and \"Swedish\" in Test Ex. #1576.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Country adjectival form / demonym match", "sec_num": "3.5" }, { "text": "In addition to these heuristics, we implemented a hyponym match heuristic similar to that discussed in (Herrera et al., 2005) , and a heuristic based on the string-edit distance of two lemmas; however, these heuristics yielded a decrease in our system's accuracy on the development set and were thus left out of our final system.", "cite_spans": [ { "start": 103, "end": 125, "text": "(Herrera et al., 2005)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Other heuristics for alignment", "sec_num": "3.6" }, { "text": "The bulk of our system focuses on heuristics for recognizing false entailment. For purposes of notation, we define binary functions for the existence Unaligned Entity: t) to be true if and only if the node h \u2208 H has been 'hard-aligned' to the node t \u2208 T using one of the heuristics in Section 3. Other notation is defined in the text as it is used. Table 1 summarizes all heuristics used in our final system to recognize false entailment.", "cite_spans": [], "ref_spans": [ { "start": 168, "end": 170, "text": "t)", "ref_id": null }, { "start": 349, "end": 356, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Recognizing false entailment", "sec_num": "4" }, { "text": "ENTITY(h) \u2227 \u2200t.\u00acALIGN(h, t) \u2192 F alse. Negation Mismatch: ALIGN(h, t) \u2227 NEG(t) = NEG(h) \u2192 F alse. Modal Mismatch: ALIGN(h, t) \u2227 MOD(t) \u2227 \u00acMOD(h) \u2192 F alse. 
Antonym Match: ALIGN(h1, t1) \u2227 REL(h0, h1) \u2227 REL(t0, t1) \u2227 LEMMA(t0) \u2208 ANTONYMS(h0) \u2192 F alse Argument Movement: ALIGN(h1, t1) \u2227 ALIGN(h2, t2) \u2227 REL(h1, h2) \u2227 \u00acREL(t1, t2) \u2227 REL \u2208 {SUBJ, OBJ, IND} \u2192 F alse Superlative Mismatch: \u00ac(SUPR(h1) \u2192 (ALIGN(h1, t1) \u2227 ALIGN(h2, t2) \u2227 REL1(h2, h1) \u2227 REL1(t2, t1) \u2227\u2200t3.(REL2(t2, t3) \u2227 REL2 \u2208 {MOD,POSSR,LOCN} \u2192 REL2(h2, h3) \u2227 ALIGN(h3, t3))) \u2192 F alse Conditional Mismatch: ALIGN(h1, t1) \u2227 ALIGN(h2, t2) \u2227 COND \u2208 PATH(t1, t2) \u2227 COND / \u2208 PATH(h1, h2) \u2192 F alse", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recognizing false entailment", "sec_num": "4" }, { "text": "If some node h has been recognized as an entity (i.e., as a proper noun, quantity, or time) but has not been aligned to any node t, we predict that the entailment is false. For example, we predict that Test Ex. #1863 is false because the entities \"Suwariya\", \"20 miles\", and \"35\" in H are unaligned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unaligned entity", "sec_num": "4.1" }, { "text": "If any two nodes (h, t) are aligned, and one (and only one) of them is negated, we predict that the entailment is false. Negation is conveyed by the NEG feature in NLPWIN. This heuristic allows us to predict false entailment in the example \"Pertussis is not very contagious\" and \"...pertussis, is a highly contagious bacterial infection\" in Test Ex. #1144.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Negation mismatch", "sec_num": "4.2" }, { "text": "If any two nodes (h, t) are aligned, and t is modified by a modal auxiliary verb (e.g, can, might, should, etc.) but h is not similarly modified, we predict that the entailment is false. Modification by a modal auxiliary verb is conveyed by the MOD feature in NLP-WIN. This heuristic allows us to predict false entailment between the text phrase \"would constitute a threat to democracy\", and the hypothesis phrase \"constitutes a democratic threat\" in Test Ex. #1203.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modal auxiliary verb mismatch", "sec_num": "4.3" }, { "text": "If two aligned noun nodes (h 1 , t 1 ) are both subjects or both objects of verb nodes (h 0 , t 0 ) in their respective sentences, i.e., REL(h 0 , h 1 ) \u2227 REL(t 0 , t 1 ) \u2227 REL \u2208 {SUBJ,OBJ}, then we check for a verb antonym match between (h 0 , t 0 ). We construct the set of verb antonyms using WordNet; we consider the antonyms of h 0 to be the union of the antonyms of the first three senses of LEMMA(h 0 ), or of the nearest antonym-possessing hypernyms if those senses do not themselves have antonyms in WordNet. Explicitly our procedure for constructing the antonym set of a node h 0 is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Antonym match", "sec_num": "4.4" }, { "text": "1. ANTONYMS(h 0 ) = {} 2. For each of the first three listed senses s of LEMMA(h 0 ) in WordNet:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Antonym match", "sec_num": "4.4" }, { "text": "(a) While |WN-ANTONYMS(s)| = 0 i. 
s \u2190 WN-HYPERNYM(s) (b) ANTONYMS(h 0 ) \u2190 ANTONYMS(h 0 ) \u222a WN-ANTONYMS(s)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Antonym match", "sec_num": "4.4" }, { "text": "In addition to the verb antonyms in WordNet, we detect the prepositional antonym pairs (before/after, to/from, and over/under). This heuristic allows us to predict false entailment between \"Black holes can lose mass...\" and \"Black holes can regain some of their mass...\" in Test Ex. #1445.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "return ANTONYMS(h 0 )", "sec_num": "3." }, { "text": "For any two aligned verb nodes (h 1 , t 1 ), we consider each noun child h 2 of h 1 possessing any of the subject, object, or indirect object relations to h 1 , i.e., there exists REL(h 1 , h 2 ) such that REL \u2208 {SUBJ, OBJ, IND}. If there is some node t 2 such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument movement", "sec_num": "4.5" }, { "text": "ALIGN(h 2 , t 2 ), but REL(t 1 , t 2 ) = REL(h 1 , h 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument movement", "sec_num": "4.5" }, { "text": ", then we predict that the entailment is false.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument movement", "sec_num": "4.5" }, { "text": "As an example, consider Figure 3 , representing subgraphs from Dev Ex. #1916: T : ...U.N. officials are also dismayed that Aristide killed a conference called by Prime Minister Robert Malval...", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 32, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Argument movement", "sec_num": "4.5" }, { "text": "Here let (h 1 , t 1 ) correspond to the aligned verbs with lemma kill, where the object of h 1 has lemma Prime Minister Robert Malval, and the object of t 1 has lemma conference. Since h 2 is aligned to some node t 2 in the text graph, but \u00acOBJ(t 1 , t 2 ) , the sentence pair is rejected as a false entailment.", "cite_spans": [], "ref_spans": [ { "start": 236, "end": 256, "text": "but \u00acOBJ(t 1 , t 2 )", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "H: Aristide kills Prime Minister Robert Malval.", "sec_num": null }, { "text": "If some adjective node h 1 in the hypothesis is identified as a superlative, check that all of the following conditions are satisfied:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Superlative mismatch", "sec_num": "4.6" }, { "text": "1. h 1 is aligned to some superlative t 1 in the text sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Superlative mismatch", "sec_num": "4.6" }, { "text": "2. The noun phrase h 2 modified by h 1 is aligned to the noun phrase t 2 modified by t 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Superlative mismatch", "sec_num": "4.6" }, { "text": "3. Any additional modifier t 3 of the noun phrase t 2 is aligned to some modifier h 3 of h 2 in the hypothesis sentence (reverse subset match).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Superlative mismatch", "sec_num": "4.6" }, { "text": "If any of these conditions are not satisfied, we predict that the entailment is false. This heuristic allows us to predict false entailment in (Dev Ex. #908):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Superlative mismatch", "sec_num": "4.6" }, { "text": "T : Time Warner is the world's largest media and Internet company. 
H: Time Warner is the world's largest company.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Superlative mismatch", "sec_num": "4.6" }, { "text": "Here \"largest media and Internet company\" in T fails the reverse subset match (condition 3) to \"largest company\" in H.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Superlative mismatch", "sec_num": "4.6" }, { "text": "For any pair of aligned nodes (h 1 , t 1 ), if there exists a second pair of aligned nodes (h 2 , t 2 ) such that the shortest path PATH(t 1 , t 2 ) in the dependency graph T contains the conditional relation, then PATH(h 1 , h 2 ) must also contain the conditional relation, or else we predict that the entailment is false. For example, consider the following false entailment (Dev Ex. #60): T : If a Mexican approaches the border, he's assumed to be trying to illegally cross.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional mismatch", "sec_num": "4.7" }, { "text": "H: Mexicans continue to illegally cross border.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional mismatch", "sec_num": "4.7" }, { "text": "Here, \"Mexican\" and \"cross\" are aligned, and the path between them in the text contains the conditional relation, but does not in the hypothesis; thus the entailment is predicted to be false.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conditional mismatch", "sec_num": "4.7" }, { "text": "In addition to these heuristics, we implemented an IS-A mismatch heuristic, which attempted to discover when an IS-A relation in the hypothesis sentence was not implied by a corresponding IS-A relation in the text; however, this heuristic yielded a loss in accuracy on the development set and was therefore not included in our final system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other heuristics for false entailment", "sec_num": "4.8" }, { "text": "In case none of the preceding heuristics for rejection are applicable, we back off to a lexical similarity model similar to that described in . For every content node h \u2208 H not already aligned by one of the heuristics in Section 3, we obtain a similarity score MN(h, t) from a similarity database that is constructed automatically from the data contained in MindNet 5 as described in (Richardson, 1997) . Our similarity function is thus:", "cite_spans": [ { "start": 384, "end": 402, "text": "(Richardson, 1997)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical similarity using MindNet", "sec_num": "5.1" }, { "text": "sim(h, t) = 1 if ANY-ALIGN(h, t); MN(h, t) if MN(h, t) > min; min otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical similarity using MindNet", "sec_num": "5.1" }, { "text": "Where the minimum score min is a parameter tuned for maximum accuracy on the development set; min = 0.00002 in our final system. 
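A minimal sketch of this backoff model follows; the alignment test and the MindNet lookup are passed in as stand-in callables for the components described above, and the aggregate entailment score that the per-node similarities feed into is the one defined in the next paragraph:

MIN_SIM = 0.00002  # similarity floor tuned on the development set

def sim(h, t, aligned, mindnet_score):
    # aligned(h, t): stands in for ANY-ALIGN, i.e. some Section 3 heuristic fired
    # mindnet_score(h, t): stands in for MN, the MindNet-derived similarity
    if aligned(h, t):
        return 1.0
    mn = mindnet_score(h, t)
    return mn if mn > MIN_SIM else MIN_SIM

def entailment_score(H, T, aligned, mindnet_score):
    # geometric mean, over hypothesis content nodes, of each node's best match in the text
    product = 1.0
    for h in H:
        product *= max(sim(h, t, aligned, mindnet_score) for t in T)
    return product ** (1.0 / len(H))
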
We then compute the entailment score:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical similarity using MindNet", "sec_num": "5.1" }, { "text": "score(H, T ) = (\u220f h\u2208H max t\u2208T sim(h, t))^(1/|H|)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical similarity using MindNet", "sec_num": "5.1" }, { "text": "This approach is identical to that used in , except that we use alignment heuristics and MindNet similarity scores in place of their web-based estimation of lexical entailment probabilities, and we take as our score the geometric mean of the component entailment scores rather than the unnormalized product of probabilities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical similarity using MindNet", "sec_num": "5.1" }, { "text": "The methods discussed so far for alignment are limited to aligning pairs of single words or multiple-word units constituting single syntactic categories; these are insufficient for the problem of detecting more complicated paraphrases. For example, consider the following true entailment (Dev Ex. #496):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring phrasal similarity using the web", "sec_num": "5.2" }, { "text": "T : ...Muslims believe there is only one God. H: Muslims are monotheistic. Here we would like to align the hypothesis phrase \"are monotheistic\" to the text phrase \"believe there is only one God\"; unfortunately, single-node alignment aligns only the nodes with lemma \"Muslim\". In this section we describe the approach used in our system to approximate phrasal similarity via distributional information obtained using the MSN Search search engine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring phrasal similarity using the web", "sec_num": "5.2" }, { "text": "We propose a metric for measuring phrasal similarity based on a phrasal version of the distributional hypothesis: we propose that a phrase template P h (e.g. 'x h are monotheistic') has high semantic similarity to a template P t (e.g. \"x t believe there is only one God\"), with possible \"slot-fillers\" x h and x t , respectively, if the overlap of the sets of observed slot-fillers X h \u2229 X t for those phrase templates is high in some sufficiently large corpus (e.g., the Web).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring phrasal similarity using the web", "sec_num": "5.2" }, { "text": "To measure phrasal similarity we issue the surface text form of each candidate phrase template as a query to a web-based search engine, and parse the returned sentences in which the candidate phrase occurs to determine the appropriate slot-fillers. In the example above, we observe the set of slot-fillers X t = {Muslims, Christians, Jews, Saivites, Sikhs, Caodaists, People}, and X h \u2229 X t = {Muslims, Christians, Jews, Sikhs, People}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring phrasal similarity using the web", "sec_num": "5.2" }, { "text": "Explicitly, given the text and hypothesis logical forms, our algorithm proceeds as follows to compute the phrasal similarity between all phrase templates in H and T :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring phrasal similarity using the web", "sec_num": "5.2" }, { "text": "1. 
For each pair of aligned single node and unaligned leaf node (t 1 , t l ) (or pair of aligned nodes (t 1 , t 2 )) in the text T :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring phrasal similarity using the web", "sec_num": "5.2" }, { "text": "(a) Use NLPWIN to generate a surface text string S from the underlying logical form PATH(t 1 , t 2 ). (b) Create the surface string template phrase P t by removing from S the lemmas corresponding to t 1 (and t 2 , if path is between aligned nodes). (c) Perform a web search for the string P t . (d) Parse the resulting sentences containing P t and extract all non-pronoun slot fillers x t \u2208 X t that satisfy the same syntactic roles as t 1 in the original sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring phrasal similarity using the web", "sec_num": "5.2" }, { "text": "2. Similarly, extract the slot fillers X h for each discovered phrase template P h in H.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring phrasal similarity using the web", "sec_num": "5.2" }, { "text": "3. Calculate paraphrase similarity as a function of the overlap between the slot-filler sets X t and X h , i.e: score(P h , P t ) = |X h \u2229X t | |Xt| .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring phrasal similarity using the web", "sec_num": "5.2" }, { "text": "We then incorporate paraphrase similarity within the lexical similarity model by allowing, for some unaligned node h \u2208 P h , where t \u2208 P t : sim(h, t) = max(MN(h, t), score(P h , P t ))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Measuring phrasal similarity using the web", "sec_num": "5.2" }, { "text": "Our approach to paraphrase detection is most similar to the TE/ASE algorithm (Szpektor et al., 2004) , and bears similarity to both DIRT (Lin and Pantel, 2001) and KnowItAll (Etzioni et al., 2004) . The chief difference in our algorithm is that we generate the surface text search strings from the parsed logical forms using the generation capabilities of NLPWIN (Aikawa et al., 2001 ), and we verify that the syntactic relations in each discovered web snippet are isomorphic to those in the original candidate paraphrase template.", "cite_spans": [ { "start": 77, "end": 100, "text": "(Szpektor et al., 2004)", "ref_id": "BIBREF12" }, { "start": 137, "end": 159, "text": "(Lin and Pantel, 2001)", "ref_id": "BIBREF9" }, { "start": 174, "end": 196, "text": "(Etzioni et al., 2004)", "ref_id": "BIBREF4" }, { "start": 363, "end": 383, "text": "(Aikawa et al., 2001", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Measuring phrasal similarity using the web", "sec_num": "5.2" }, { "text": "In this section we present the final results of our system on the PASCAL RTE-1 test set, and examine our features in an ablation study. The PASCAL RTE-1 development and test sets consist of 567 and 800 examples, respectively, with the test set split equally between true and false examples. Table 2 displays the accuracy and confidenceweighted score 6 (CWS) of our final system on each of the tasks for both the development and test sets. 
Our overall test set accuracy of 62.50% represents a 2.1% absolute improvement over the task-independent system described in (Tatu and Moldovan, 2005) , and a 20.2% relative improvement in accuracy over their system with respect to an uninformed baseline accuracy of 50%.", "cite_spans": [ { "start": 564, "end": 589, "text": "(Tatu and Moldovan, 2005)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 291, "end": 298, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "6" }, { "text": "To compute confidence scores for our judgments, any entailment determined to be false by any heuristic was assigned maximum confidence; no attempts were made to distinguish between entailments rejected by different heuristics. The confidence of all other predictions was calculated as the absolute value of the difference between the output score(H, T ) of the lexical similarity model and the threshold t = 0.1285 as tuned for highest accuracy on our development set. We would expect a higher CWS to result from learning a more appropriate confidence function; nonetheless our overall test set CWS of 0.6534 is higher than previously-reported task-independent systems (however, the task-dependent system reported in (Raina et al., 2005) achieves a CWS of 0.686). Table 3 displays the results of our feature ablation study, analyzing the individual effect of each feature. Of the seven heuristics used in our final system for node alignment (including lexical similarity and paraphrase detection), our ablation study showed that five were helpful in varying degrees on our test set, but that removal of either MindNet similarity scores or paraphrase detection resulted in no accuracy loss on the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Performance Comparison on the PASCAL RTE-1 Test Set", "sec_num": "6.1" }, { "text": "Of the six false entailment heuristics used in the final system, five resulted in an accuracy improvement on the test set (the most effective by far was the \"Argument Movement\", resulting in a net gain of 20 correctly-classified false examples); inclusion of the \"Superlative Mismatch\" feature resulted in a small net loss of two examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Performance Comparison on the PASCAL RTE-1 Test Set", "sec_num": "6.1" }, { "text": "We note that our heuristics for false entailment, where applicable, were indeed significantly more accurate than our final system as a whole; on the set of examples predicted false by our heuristics we had 71.3% accuracy on the training set (112 correct out of 157 predicted), and 72.9% accuracy on the test set (164 correct out of 225 predicted).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Performance Comparison on the PASCAL RTE-1 Test Set", "sec_num": "6.1" }, { "text": "In this paper we have presented and analyzed a system for recognizing textual entailment focused primarily on the recognition of false entailment, and demonstrated higher performance than achieved by previous approaches on the widely-used PASCAL RTE test set. Our system achieves state-of-the-art performance despite not exploiting a wide array of sources of knowledge used by other high-performance systems; we submit that the performance of our system demonstrates the unexploited potential in features designed specifically for the recognition of false entailment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "http://www.pascal-network.org/Challenges/RTE. 
The examples given throughout this paper are from the first PASCAL RTE dataset, described in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "List of adjectival forms and demonyms based on the list at: http://en.wikipedia.org/wiki/List of demonyms", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://research.microsoft.com/mnex", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "we compute the confidence-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As discussed in Section 2, features with no effect on development set accuracy were included in the system if and only if they improved the system's unweighted F-score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank Chris Brockett, Michael Gamon, Gary Kacmarick, and Chris Quirk for helpful discussion. Also, thanks to Robert Ragno for assistance with the MSN Search API. Rion Snow is supported by an NDSEG Fellowship sponsored by the DOD and AFOSR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "weighted score (or \"average precision\") over n examples {c 1 , c 2 , ..., c n } ranked in order of decreasing confidence as Table 3 : Feature ablation study; quantity is the accuracy loss obtained by removal of single feature test set CWS of 0.6534 is higher than previouslyreported task-independent systems (however, the task-dependent system reported in (Raina et al., 2005) achieves a CWS of 0.686). Table 3 displays the results of our feature ablation study, analyzing the individual effect of each feature. Of the seven heuristics used in our final system for node alignment (including lexical similarity and paraphrase detection), our ablation study showed", "cite_spans": [ { "start": 356, "end": 376, "text": "(Raina et al., 2005)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 124, "end": 131, "text": "Table 3", "ref_id": null }, { "start": 403, "end": 410, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Multilingual Sentence Generation", "authors": [ { "first": "Takako", "middle": [], "last": "Aikawa", "suffix": "" }, { "first": "Maite", "middle": [], "last": "Melero", "suffix": "" }, { "first": "Lee", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Andi", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2001, "venue": "Proc. of 8 th European Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takako Aikawa, Maite Melero, Lee Schwartz, and Andi Wu. 2001. Multilingual Sentence Generation. In Proc. of 8 th European Workshop on Natural Language Generation.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "MITRE's Submissions to the EU Pascal RTE Challenge", "authors": [ { "first": "Samuel", "middle": [], "last": "Bayer", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "Lisa", "middle": [], "last": "Ferro", "suffix": "" }, { "first": "John", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Yeh", "suffix": "" } ], "year": 2005, "venue": "Proc. 
of the PASCAL Challenges Workshop on RTE", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel Bayer, John Burger, Lisa Ferro, John Henderson, and Alexander Yeh. 2005. MITRE's Submissions to the EU Pascal RTE Challenge. In Proc. of the PASCAL Challenges Workshop on RTE 2005.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Recognizing Textual Entailment with Logical Inference", "authors": [ { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" }, { "first": "Katja", "middle": [], "last": "Markert", "suffix": "" } ], "year": 2005, "venue": "Proc. HLT-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Bos and Katja Markert. 2005. Recognizing Tex- tual Entailment with Logical Inference. In Proc. HLT- EMNLP 2005.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The PASCAL Recognising Textual Entailment Challenge", "authors": [ { "first": "Oren", "middle": [], "last": "Ido Dagan", "suffix": "" }, { "first": "Bernardo", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "", "middle": [], "last": "Magnini", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the PASCAL Challenges Workshop on RTE", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL Recognising Textual Entailment Challenge. In Proceedings of the PASCAL Challenges Workshop on RTE 2005.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Web-scale information extraction in KnowItAll", "authors": [ { "first": "Oren", "middle": [], "last": "Etzioni", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Cafarella", "suffix": "" }, { "first": "Doug", "middle": [], "last": "Downey", "suffix": "" }, { "first": "Stanley", "middle": [], "last": "Kok", "suffix": "" }, { "first": "Ana-Maria", "middle": [], "last": "Popescu", "suffix": "" } ], "year": 2004, "venue": "Proc. WWW", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Etzioni, Michael Cafarella, Doug Downey, Stanley Kok, Ana-Maria Popescu, Tal Shaked, Stephen Soder- land, Daniel S. Weld, and Alexander Yates. 2004. Web-scale information extraction in KnowItAll. In Proc. WWW 2004.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "WordNet: An Electronic Lexical Database", "authors": [], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. MIT Press, Cambridge, Mass.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Web Based Probabilistic Textual Entailment", "authors": [ { "first": "Oren", "middle": [], "last": "Glickman", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Moshe", "middle": [], "last": "Koppel", "suffix": "" } ], "year": 2005, "venue": "Proc. of the PASCAL Challenges Workshop on RTE", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oren Glickman, Ido Dagan, and Moshe Koppel. 2005. Web Based Probabilistic Textual Entailment. In Proc. 
of the PASCAL Challenges Workshop on RTE 2005.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Intelligent Writing Assistance", "authors": [ { "first": "E", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Heidorn", "suffix": "" } ], "year": 2000, "venue": "A Handbook of Natural Language Processing: Techniques and Applications for the Processing of Language as Text. Marcel Dekker", "volume": "", "issue": "", "pages": "181--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "George E. Heidorn. 2000. Intelligent Writing Assis- tance. In R. Dale, H. Moisl, and H. Somers (eds.), A Handbook of Natural Language Processing: Tech- niques and Applications for the Processing of Lan- guage as Text. Marcel Dekker, New York. 181-207.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Textual Entailment Recognision Based on Dependency Analysis and WordNet", "authors": [ { "first": "Jes\u00fas", "middle": [], "last": "Herrera", "suffix": "" }, { "first": "Anselmo", "middle": [], "last": "Pe\u00f1as", "suffix": "" }, { "first": "Felisa", "middle": [], "last": "Verdejo", "suffix": "" } ], "year": 2005, "venue": "Proc. of the PASCAL Challenges Workshop on RTE", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jes\u00fas Herrera, Anselmo Pe\u00f1as, and Felisa Verdejo. 2005. Textual Entailment Recognision Based on Depen- dency Analysis and WordNet. In Proc. of the PASCAL Challenges Workshop on RTE 2005.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "DIRT -Discovery of Inference Rules from Text", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2001, "venue": "Proc. KDD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin and Patrick Pantel. 2001. DIRT -Discovery of Inference Rules from Text. In Proc. KDD 2001.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Robust textual inference via learning and abductive reasoning", "authors": [ { "first": "Rajat", "middle": [], "last": "Raina", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2005, "venue": "Proc. AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rajat Raina, Andrew Y. Ng, and Christopher D. Man- ning. 2005. Robust textual inference via learning and abductive reasoning. In Proc. AAAI 2005.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Determining Similarity and Inferring Relations in a Lexical Knowledge Base", "authors": [ { "first": "D", "middle": [], "last": "Stephen", "suffix": "" }, { "first": "", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen D. Richardson. 1997. Determining Similarity and Inferring Relations in a Lexical Knowledge Base. Ph.D. 
thesis, The City University of New York.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Scaling Web-based Acquisition of Entailment Relations", "authors": [ { "first": "Idan", "middle": [], "last": "Szpektor", "suffix": "" }, { "first": "Hristo", "middle": [], "last": "Tanev", "suffix": "" }, { "first": "Ido", "middle": [], "last": "Dagan", "suffix": "" }, { "first": "Bonaventura", "middle": [], "last": "Coppola", "suffix": "" } ], "year": 2004, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Idan Szpektor, Hristo Tanev, Ido Dagan, and Bonaventura Coppola. 2004. Scaling Web-based Acquisition of Entailment Relations. In Proc. EMNLP 2004.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A Semantic Approach to Recognizing Textual Entailment", "authors": [ { "first": "Marta", "middle": [], "last": "Tatu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Moldovan", "suffix": "" } ], "year": 2005, "venue": "Proc. HLT-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marta Tatu and Dan Moldovan. 2005. A Semantic Ap- proach to Recognizing Textual Entailment. In Proc. HLT-EMNLP 2005.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "What Syntax Can Contribute in the Entailment Task", "authors": [ { "first": "Lucy", "middle": [], "last": "Vanderwende", "suffix": "" }, { "first": "B", "middle": [], "last": "William", "suffix": "" }, { "first": "", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2006, "venue": "MLCW 2005", "volume": "3944", "issue": "", "pages": "205--216", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucy Vanderwende and William B. Dolan. 2006. What Syntax Can Contribute in the Entailment Task. In MLCW 2005, LNAI 3944, pp. 205-216. J. Quinonero- Candela et al. (eds.). Springer-Verlag.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Hypothesis: ''Hepburn, who won four Oscars...'' Text: ''Hepburn, a four-time Academy Award winner...Example of synonym, value, and derivational form alignment heuristics, Dev Ex. #767" }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Example of object movement signaling false entailment" }, "TABREF1": { "html": null, "text": "Summary of heuristics for recognizing false entailment of each semantic node feature recognized by NLP-WIN; e.g., if h is negated, we state that NEG(h) = TRUE. Similarly we assign binary functions for the existence of each syntactic relation defined over pairs of nodes. Finally, we define the function", "type_str": "table", "num": null, "content": "" } } } }